Wednesday, September 21, 2011

How to run a method 10 minutes after startup

I had a use case where I had to run a method 10 minutes after Tomcat start-up, but it should run only once. My first idea was to simply use a @Scheduled task, like all the other tasks we have, and just run it once. The problem is that there is no way to use annotation-based scheduling and run a task only once. So I came up with the following solution:
@Service
public class SomeServiceImpl implements SomeService, InitializingBean {
  @Autowired
  private MyTask myTask;

  @Override
  public void afterPropertiesSet() {
    myTask.doSomethingAfterTenMinutes();
  }
}
@Component
public class MyTaskImpl implements MyTask {
  @Async
  @Override
  public void doSomethingAfterTenMinutes() {
    try {
      Thread.sleep(600000); // 10 minutes
    } catch (InterruptedException e) {
      //error handling
    }
    //do your task
  }
}
This starts an asynchronous task after a particular bean is initialized. The asynchronous task just waits 10 minutes and then runs whatever it needs to do. While this worked fine at first, I ran into unexpected trouble.
I started to notice that deployments on our test servers sporadically stopped working. After some investigating I noticed Tomcat didn't stop properly, resulting in multiple running Tomcat processes. I started searching for the origin of this problem and traced it back to this task. It happened only when deploying within the 10-minute window after a previous deploy.
The task is started during Spring's initialization phase of the context, but before Tomcat has finished starting. Normally the container knows when start-up is done and marks it as such. Apparently that no longer happened when a new Thread was started during the initialization phase. Tomcat wouldn't stop when given the command, because it thought it was still starting up.

A better solution was quickly found by just using the XML-based scheduling API:
<bean id="myTask" class="nl.peecho.task.MyTaskImpl" />
	
<bean id="jobDetail" class="org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean">
  <property name="targetObject" ref="myTask" />
  <property name="targetMethod" value="doSomethingAfterTenMinutes" />
  <property name="concurrent" value="false" />
</bean>

<bean id="simpleTrigger" class="org.springframework.scheduling.quartz.SimpleTriggerBean">
  <property name="jobDetail" ref="jobDetail" />
  <property name="startDelay" value="600000" /> <!-- 10 minutes -->
  <property name="repeatCount" value="0"/>
</bean>
	
<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
  <property name="triggers">
    <list>
      <ref bean="simpleTrigger" />
    </list>
  </property>
</bean>
Lessons learned:
- Thread.sleep is evil; always.
- When running a single-run task, use the XML-based schedulers.
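For completeness: the same one-shot delay can also be had without Quartz or Thread.sleep by handing the work to a scheduler backed by a daemon thread, so it can never keep the container alive during shutdown. This is just a JDK-only sketch of the idea (the class name and the short delay are mine; a real deployment would use a 10-minute delay with TimeUnit.MINUTES):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class OneShotStartupTask {
    public static void main(String[] args) throws Exception {
        // daemon thread: it never prevents the JVM (or Tomcat) from shutting down
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "one-shot-startup-task");
            t.setDaemon(true);
            return t;
        });

        CountDownLatch done = new CountDownLatch(1);
        // schedule a single execution; use a delay of 10 with TimeUnit.MINUTES in real code
        scheduler.schedule(done::countDown, 50, TimeUnit.MILLISECONDS);

        boolean ran = done.await(5, TimeUnit.SECONDS);
        System.out.println("task ran: " + ran);
        scheduler.shutdown();
    }
}
```

The crucial difference from the @Async approach above is that no thread is spawned on the container's own start-up path; the scheduler thread is marked as a daemon and stays out of the container's way.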


Minimizing downtime on Amazon AWS

Last week I wrote an article for Peecho about our infrastructure, here is a snippet and the link to the entire article:

As we argued in another article, being fast is the secret to scalability. Automation makes you speedy, helping you rule out as many commodity tasks as possible. So, we decided to share a few tricks about increasing your uptime with automation.

Downtime is bad. It moves your focus away from creating your awesome product to the arduous task of fixing broken things. It is a complete waste of time, whether it is because of unexpected outages, crashes or your own software update procedure.

Our cloud printing company Peecho runs on Amazon Web Services. Every week, we deploy multiple new versions of our entire system. Still, our Pingdom statistics show a 99.96% uptime over the past year. The following write-up shows our efforts to minimize downtime with AWS, based on some best practices and an automated deployment procedure of instances within an auto-scaling group.

Read my entire article here

Thursday, September 08, 2011

Scalability on a shoestring

A couple of months ago I wrote an article about Peecho together with Sander Nagtegaal for Highscalability.com.

We are a start-up, so the most important thing that we considered before we started was simply money - or rather, the lack thereof. Although we required some serious firepower, the fully operational system should cost no more than a few hundred bucks a month. This article explains how we did it.

Read the entire article on Peecho Architecture - Scalability On A Shoestring.


Thursday, November 18, 2010

S3 File upload with Java and Spring's RestTemplate

At Peecho, we use many of the Amazon AWS services. For example, we use EC2 for our virtual machines and S3 for all of our storage. Because of the scalable nature of S3, we could theoretically serve an infinite number of users uploading files to our platform without stressing our machines or infrastructure at all. The only drawback is that connected apps have to upload their files directly to S3 - which can be challenging at times. That's why I'm writing this blog.

First of all, the Spring RestTemplate class is awesome. It is a really neat and easy way to create requests to RESTful web services - or even not-so-RESTful services. The cool thing is that you can configure marshallers on the template, which will automatically convert outgoing and incoming objects into XML, JSON and more. For example, you can configure an XStreamMarshaller to marshal all outgoing objects into XML and all incoming XML into objects.

Uploading to S3 can be really easy if you use one of the many libraries that Amazon provides for the different platforms like Java, .NET, PHP et cetera. These libraries have easy-to-use methods to upload files to buckets, create objects and set policies. To make use of all this, you need an Amazon public and secret key, which is fine if you are uploading to your own S3 account. We need our customers to be able to upload to our S3 account, and naturally we can't give them the secret key to our Amazon account, because they could do all kinds of nasty evil stuff with it.

Luckily, Amazon provides a way to upload files to S3 using a pre-signed url. This url contains a base64-encoded policy file, some paths to the data and a hash of the entire url, signed using your secret key. The policy file specifies exactly what you can upload and where. For example, it can specify that you can only upload *.jpg files to the /user-data/username/* path in S3. The policy is generated on our server, using our secret key. This way customers of our API can only upload into directories that we specify, and tampering with other customers' files is impossible. Doing browser-based uploads using a pre-signed url is explained in this S3 article.
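For illustration, the server-side signing step boils down to Base64-encoding the policy JSON and then computing an HMAC-SHA1 over that encoded string with your secret key. A rough sketch (bucket name, path and key below are made-up placeholders, and java.util.Base64 requires Java 8+; the expected signature format follows Amazon's browser-based upload documentation):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PolicySigner {
    public static void main(String[] args) throws Exception {
        String secretKey = "fake-secret-key"; // placeholder - never ship your real key
        String policyJson = "{\"expiration\": \"2012-01-01T12:00:00.000Z\", \"conditions\": ["
                + "{\"bucket\": \"my-bucket\"},"
                + "[\"starts-with\", \"$key\", \"user-data/username/\"]]}";

        // the policy travels Base64-encoded in the form post
        String policyB64 = Base64.getEncoder()
                .encodeToString(policyJson.getBytes(StandardCharsets.UTF_8));

        // the signature is an HMAC-SHA1 over the *encoded* policy, keyed with the secret
        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        String signature = Base64.getEncoder()
                .encodeToString(hmac.doFinal(policyB64.getBytes(StandardCharsets.UTF_8)));

        // sanity checks: the encoded policy round-trips, and SHA1's 20-byte MAC
        // always Base64-encodes to 28 characters
        String roundTrip = new String(Base64.getDecoder().decode(policyB64), StandardCharsets.UTF_8);
        System.out.println("round-trip ok: " + roundTrip.equals(policyJson));
        System.out.println("signature length: " + signature.length());
    }
}
```

The resulting Policy and Signature strings are exactly what goes into the form fields of the multi-part post shown further down.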

Now we have a signed url to post to and a valid policy file - but we still need to actually upload the data. This is where the RestTemplate comes in. S3 expects a multi-part form post instead of a normal file upload. Luckily there is a MessageConverter in Spring to create multi-part form posts! Configure it in your application context like this:

<bean id="restTemplate" class="org.springframework.web.client.RestTemplate">
 <property name="messageConverters">
  <list>
   <bean class="org.springframework.http.converter.StringHttpMessageConverter" />
   <bean class="org.springframework.http.converter.FormHttpMessageConverter" />
  </list>
 </property>
</bean>

The FormHttpMessageConverter makes it possible to create a multi-part form post. In your Java code you can now create the request:

MultiValueMap<String, Object> form = new LinkedMultiValueMap<String, Object>();
form.add("key", objectKey);
form.add("acl", "private");
form.add("Content-Type", "image/jpeg");
form.add("AWSAccessKeyId", awsAccessKeyId);
form.add("Policy", serverGeneratedPolicy);
form.add("Signature", serverGeneratedSignature);
form.add("Filename", "");
form.add("success_action_status", "201");
form.add("file", new FileSystemResource(file));
restTemplate.postForLocation(signedPutUrl, form);

When providing a map with only strings, the converter will turn it into a normal form post. However, when adding a file to the map, the converter automatically makes it a multi-part form post. The Filename parameter of the form is set to an empty string, which means Amazon S3 will use the filename of the uploaded file as the name of the object in S3.

Well that is pretty much it - it can't get much easier, right? :-)

Thursday, November 11, 2010

Eclipse open implementation & open interface.

Tired of always using CTRL+T on a method, then selecting the first implementation with your arrow keys (even if there is only one) and opening it? There is a plugin for this:

Get it from the Eclipse update site (feature: Implementors): http://eclipse-tools.sourceforge.net/updates/

Or from the homepage of the project itself: http://eclipse-tools.sourceforge.net/implementors/download.html

It does 2 things:
- Open Implementation (jumps to the declaration of the implementing method when activated on a method call on a method on an interface. If invoked on an interface type itself, it jumps to the implementation class.)
- Open Interface (jumps to an interface implemented by the class containing the selected method where the interface declares the method. If invoked on an class itself, it jumps to the interface type.)

The default bindings are ALT+F3 and CTRL+ALT+F3, but I changed Open Implementation to F3 itself, because I always want to go to the implementation directly and not to the interface. If there isn't an interface, it just goes to the class. This is a really handy shortcut!

I tested it on Eclipse 1.6 and it works fine :)

Wednesday, October 27, 2010

Check if url exists.

Let's do a quick blog post about something that really isn't that hard, but I've never done it before, so some sample code could speed up other people who want to do the same.
If there are better ways to do this, please let me know. The idea is to check whether a file on the web exists before actually downloading it, say a PDF file. Someone submits a url to a PDF file, but you want to process it asynchronously; the least you can do before creating the processing ticket is check whether the file exists. You can report right back to the submitter in case the url does not exist or returns an error.


private boolean urlExists(String urlString) {
  try {
    HttpURLConnection.setFollowRedirects(true);
    URL u = new URL(urlString);
    HttpURLConnection huc = (HttpURLConnection) u.openConnection();
    huc.setRequestMethod("GET");
    huc.setRequestProperty("User-Agent", "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2 (.NET CLR 3.5.30729)");
    huc.connect();
    int responseCode = huc.getResponseCode(); // reads headers only, not the body
    huc.disconnect();
    return responseCode == HttpURLConnection.HTTP_OK;
  } catch (MalformedURLException e) {
    //add logging here
  } catch (IOException e) {
    //add logging here
  }
  return false;
}


The User-Agent setting is there to trick servers into thinking you are a valid 'browser'; some servers might redirect you because you have an unknown browser version.

Normally you would do a "HEAD" request instead of a "GET", since HEAD is meant for exactly this. We actually ran into trouble with the HEAD method, because the files we want to check are in Amazon's S3 storage and Amazon forbids using HEAD on these urls. Using GET still won't download the entire file; it just retrieves a response code.

Friday, March 19, 2010

The ISWFContext and Embedded fonts in the Text Layout Framework.

In our product we make intense use of the new Text Layout Framework. This framework is still in beta, and with beta versions come problems. This week, after updating to a new version, our embedded fonts suddenly stopped working. I really dug deep into this one and tried to narrow it down as much as possible. The problem occurs when using a mixture of embedded fonts, the TLF framework and spark components that internally use a RichEditableText.

Check out the sample code at the bottom of this post. It is the complete code, with the workarounds already in it. Remove the 'option 1' and 'option 2' lines and the "s:NumericStepper" spark component, and you will see the code works properly. If you then add the "s:NumericStepper" back, the embedded font stops rendering properly.

Now, this really sounds like a bug in the spark components to me: adding a random spark component (one using a RichEditableText) somewhere in the application breaks the rendering of the embedded fonts. I have discussed this on the Adobe forums, but they don't agree; they suggest I am simply "missing an ISWFContext somewhere". Here is the discussion on the forum. As Alex Harui describes on his blog, TextLines should be created with a specified ISWFContext, and that swf context should be the same context as the embedded font's swf.

The question remains: why should I, as a user of the SDK, have to know about these internal workings of the SDK and the Text Layout Framework? I don't care about how, when and in which context the fonts are loaded, because I am not creating the TextLines - the TLF is (i.e. the ContainerController and FlowComposer). But somehow I still have to specify the correct swf context.

After debugging through the SDK code I finally got to two workarounds. I call them workarounds because I'm still not satisfied with the solution. I posted them again on the forum, but didn't get any suggestions on how to do a better fix.

Option 1 simply removes the function of a global setting which should return whether to use an embedded font lookup or a device font lookup. Removing the function (set by a RichEditableText) fixes my problem with the embedded font. I'm not sure why, but basically returning the embedded font lookup method would fix it; maybe that is the default? Another option would be to create a function that returns the embedded font lookup option every time. While this fixes the problem, it still isn't using the ISWFContext.

Option 2 actually adds the ISWFContext to the flowComposer. While this seems the way to go, it leaves me with some questions. For example, a textFlow has one flowComposer, and you can only set one ISWFContext on the flowComposer. But a textFlow can have multiple paragraphs with multiple fonts embedded from different swfs. So which ISWFContext should you set on the flowComposer? You can never set the 'correct' ISWFContext when multiple fonts from different swf contexts are used.

Hopefully Adobe fixes these problems so the users of the SDK don't have to worry about this level of stuff. Usually Adobe does a great job though, keeping the internal details of the SDK far away from users.


<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009" 
                  xmlns:s="library://ns.adobe.com/flex/spark" 
                  xmlns:mx="library://ns.adobe.com/flex/mx" minWidth="1024" minHeight="768" xmlns:mx1="library://ns.adobe.com/flex/halo"
                  creationComplete="onCreationComplete()">
     
     <fx:Style> 
          @font-face { 
               embedAsCFF: true; 
               fontFamily: CourierCFF; 
               src: url(c:/windows/fonts/COUR.ttf); 
               fontStyle: normal; 
               fontWeight: normal; 
          } 
     </fx:Style> 
     
     <fx:Script>
          <![CDATA[
               import flashx.textLayout.compose.ISWFContext;
               import flashx.textLayout.formats.ITextLayoutFormat;
               import flashx.textLayout.elements.GlobalSettings;
               import flash.text.engine.FontLookup;
               import flash.text.engine.RenderingMode;
               import flashx.textLayout.elements.TextFlow;
               import flashx.textLayout.elements.SpanElement;
               import flashx.textLayout.elements.ParagraphElement;
               import flashx.textLayout.container.ContainerController;
               import flashx.textLayout.formats.TextLayoutFormat;
               import flashx.textLayout.elements.Configuration;
               import flashx.textLayout.compose.IFlowComposer;
               
               use namespace mx_internal;
               
               private function onCreationComplete():void { 
                    createTextFlow();
               }
               
               private function createTextFlow():TextFlow {
                    var config:Configuration = new Configuration();
                    var textLayoutFormat:TextLayoutFormat = new TextLayoutFormat();
                     textLayoutFormat.fontFamily = "CourierCFF";
                    textLayoutFormat.fontLookup = FontLookup.EMBEDDED_CFF;
                    textLayoutFormat.renderingMode = RenderingMode.CFF;
                    
                    config.textFlowInitialFormat = textLayoutFormat;
                    var textFlow:TextFlow = new TextFlow(config);
                    
                    var p:ParagraphElement = new ParagraphElement();
                    var span:SpanElement = new SpanElement();
                     span.text = "Is this Courier?";
                    p.addChild(span);
                    textFlow.addChild(p);
                    
                    var flowComposer:IFlowComposer = textFlow.flowComposer;
                    
                    //option 1: This fixes the problem, but i'm not sure why it works without a fontLookupFunction? Does it default to embedded?
                    //you could also specify your own function always returning embedded, but that would just resolve to the same behaviour..
                    GlobalSettings.resolveFontLookupFunction = null;
                    
                    //option 2: I guess this is the more proper solution, specifying the swfContext. Only what context to choose? 
                    //every font has its own css/swf and thus context. A paragraph can select a font (and the bold/italic options), 
                    //which font to choose here if there are multiple fonts used in the textflow?
                     textFlow.flowComposer.swfContext = ISWFContext(this.getFontContext("CourierCFF", false, false, FontLookup.EMBEDDED_CFF));
                    
                    var cc:ContainerController = new ContainerController( mainText, 200, 200 );
                    flowComposer.addController( cc );
                    flowComposer.updateAllControllers();
                    return textFlow; 
               }
          ]]>
     </fx:Script>
     
     <s:VGroup>
          <s:NumericStepper /> <!-- remove this stepper and the embedded font will work without option 1 or 2 -->
          <mx:UIComponent id="mainText"/> 
     </s:VGroup>
</s:Application>

Wednesday, January 06, 2010

Embedding fonts, Unicode ranges and the horizontal tab character...

I ran into a very strange bug in the Flash player while working on a project. The problem appeared when using Flex 4's TLF (Text Layout Framework) with embedded fonts (using CFF).

In other projects I added the unicode ranges to the css file as lots of other blogs describe (like: http://blog.flexexamples.com/2007/08/07/specifying-certain-unicode-ranges-for-embedded-fonts/); you do want to set the unicode ranges you need on the fonts you embed, to save lots of otherwise wasted memory.

In our other projects this worked without any problems, but in combination with TLF suddenly weird stuff was happening. With a sentence containing lots of spaces, like "a a a a a a a a a a a a a a a a a a a a a a a a", words start to disappear at the beginning of the sentence. The more words and spaces you add, the more words disappear :-/

I created a bug back in November on Adobe's bugtracker (https://bugs.adobe.com/jira/browse/FP-3082) but didn't get any response from Adobe about this.

A couple of days ago I decided to take another shot at this issue, and thanks to a colleague with a couple of good ideas I came to the following conclusion.

The problem occurs with this unicode range (the tail of the @font-face rule):

unicodeRange:
  U+0040-U+007E;
}

The problem is solved with this unicode range:

unicodeRange:
  U+0009-U+0009,
  U+0040-U+007E;
}

The unicode character U+0009 is the 'horizontal tab'. I really don't know how the internals of the Flash player are creating tabs over my characters or something, but at least the solution works.
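A quick way to see why the original range missed this character is to check it programmatically. This throwaway check (the helper and names are mine, just for illustration) shows that U+0009 falls outside U+0040-U+007E, so the tab glyph was simply not embedded:

```java
public class UnicodeRangeCheck {
    // true if c falls inside any of the inclusive [start, end] ranges
    static boolean inRanges(char c, int[][] ranges) {
        for (int[] r : ranges) {
            if (c >= r[0] && c <= r[1]) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[][] broken = { {0x0040, 0x007E} };                     // the failing range
        int[][] fixed  = { {0x0009, 0x0009}, {0x0040, 0x007E} };   // with U+0009 added
        System.out.println("tab in broken range: " + inRanges('\t', broken));
        System.out.println("tab in fixed range: " + inRanges('\t', fixed));
    }
}
```

Running this kind of audit over the characters your text actually uses would have flagged the missing tab character up front.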

Tuesday, December 01, 2009

Java, ImageMagick and Runtime.getRuntime().exec(command)

Today I needed to show a 50MB EPS file in the Flash player on the client. I chose ImageMagick (a C++ image manipulation library) to convert the image to a JPEG, because there really aren't any good Java libraries that can do this kind of stuff without draining all of the resources on your server.

ImageMagick wasn't hard to install at all, the guide on http://www.imagemagick.org is pretty clear and self explanatory.

There are only two ImageMagick wrappers for Java: JMagick and im4java. Both are poorly documented and hardly used at all. JMagick uses a JNI connection through a dll you have to install; im4java uses the Java Runtime.getRuntime().exec command to get it to work. On the downside, that library does not work on Windows without modification of the source code.

Anyway, both didn't seem mature enough (or working at all) to use in a production environment, so I decided to do it myself.

Getting it to work was fairly easy using the exec command. I omitted the try-catch clauses in the following code for clarity. The command variable is a String[] of parameters including ImageMagick's convert command, i.e. as a single string:
"/usr/bin/convert -geometry 100x100 -quality 75 in.gif out.jpg"

proc = Runtime.getRuntime().exec(command);
exitStatus = proc.waitFor();

This worked fine until I tried my 50MB EPS file. The result was that the convert.exe command on my machine never exited and the Java application just waited at proc.waitFor() forever.

After searching through the internet and javadocs I discovered this:

"Because some native platforms only provide limited buffer size for standard input and output streams, failure to promptly write the input stream or read the output stream of the subprocess may cause the subprocess to block, and even deadlock."

There is a pretty good blog on this subject on javaworld: http://www.javaworld.com/javaworld/jw-12-2000/jw-1229-traps.html?page=1

The solution is to continuously read the output streams. You could do this using a while loop, or you can use a StreamGobbler:
"A stream gobbler pipes ("gobbles") an input stream to an output stream."
API: http://www.is.informatik.uni-duisburg.de/projects/jayspirit/javadoc/hyspirit/util/StreamGobbler.html
JAR: http://www.findjar.com/jar/ch.ethz.ganymed/jars/ganymed-ssh2-build209.jar.html
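The idea behind the gobbler is small enough to sketch. This simplified version is my own (the ganymed one linked above differs): a daemon thread that drains an InputStream so the child process can never block on a full pipe buffer. The demo feeds it a ByteArrayInputStream instead of a real process stream:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

// Minimal stream gobbler: starts draining the stream on its own daemon thread.
class StreamGobbler extends Thread {
    private final InputStream in;

    StreamGobbler(InputStream in) {
        this.in = in;
        setDaemon(true);
        start(); // start draining immediately, matching how the blog code uses it
    }

    @Override
    public void run() {
        try (BufferedReader r = new BufferedReader(new InputStreamReader(in))) {
            while (r.readLine() != null) {
                // discard each line of process output (a real version might log it)
            }
        } catch (IOException ignored) {
            // the process stream closing is expected at process exit
        }
    }
}

public class GobblerDemo {
    public static void main(String[] args) throws Exception {
        InputStream fake = new ByteArrayInputStream("line1\nline2\n".getBytes());
        StreamGobbler gobbler = new StreamGobbler(fake);
        gobbler.join(1000); // wait for the gobbler thread to finish draining
        System.out.println("drained: " + (fake.available() == 0));
    }
}
```

With real processes you would create one gobbler for getErrorStream() and one for getInputStream() before calling waitFor(), exactly as the complete code below does.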

This fixed my problem, no more deadlocks :-) Here is the complete code used:

private boolean convert(File in, File out, int width, int height) {
  LOGGER.info("convert(" + in.getPath() + ", " + out.getPath() + ", " + width + "x" + height + ")");

  List<String> command = new ArrayList<String>(10);
  command.add(imageMagickConvertCommand);
  command.add("-geometry");
  command.add(width + "x" + height);
  command.add("-quality");
  command.add("" + JPGQUALITY);
  command.add(in.getAbsolutePath());
  command.add(out.getAbsolutePath());

  return exec(command.toArray(new String[command.size()]));
}

private boolean exec(String[] command) {
  Process proc;
  try {
    proc = Runtime.getRuntime().exec(command);
  } catch (IOException e) {
    LOGGER.error("IOException while trying to execute " + Arrays.toString(command), e);
    return false;
  }

  //stream gobblers consume the error and output streams to prevent the process from hanging
  new StreamGobbler(proc.getErrorStream());
  new StreamGobbler(proc.getInputStream());

  int exitStatus = -1;
  try {
    exitStatus = proc.waitFor();
  } catch (InterruptedException e) {
    LOGGER.error(e.getMessage(), e);
  }

  if (exitStatus != 0) {
    LOGGER.error("Error executing command, exit status: " + exitStatus);
  }
  return exitStatus == 0;
}

Monday, November 16, 2009

Flex Accessibility Option

A couple of days ago I updated to the latest nightly build of the Flex SDK (4.0.0.11686) because of some bugs I encountered. After updating my code, everything seemed to run smoothly and the bugs I previously had were fixed in the nightly build.

After a run through our build server and a deploy to the test environment, the application seemed to have stopped working. The only clue I got was from the debug player, which reported the following error:

TypeError: Error #1009: Cannot access a property or method of a null object reference.
  at spark.accessibility::PanelAccImpl$/http://www.adobe.com/2006/flex/mx/internal::createAccessibilityImplementation()[E:\dev\trunk\frameworks\projects\spark\src\spark\accessibility\PanelAccImpl.as:76]
  at spark.components::Panel/initializeAccessibility()[E:\dev\trunk\frameworks\projects\spark\src\spark\components\Panel.as:457]

My first guess was that something had changed in the RSLs which I forgot to update. Something weird is happening here, because the RSL swfs in the SDK are named like framework_4.0.0.11686.swf, but after enabling the RSLs in Flash Builder, Flash Builder itself generates (copies) the swfs to the bin-debug dir named like framework_4.0.0.0.swf, and the default flex-config.xml also uses the 0.0.swf notation by default.

Anyway, it turned out that had nothing to do with it. The problem was a setting called 'accessible' in the flex-config.xml; normally this setting is false by default, and Flash Builder also overrides this setting to false. So the application was running fine from Flash Builder, but when compiled with ant it used different settings. This setting was always false in former releases of the SDK.

<!-- Turn on generation of accessible SWFs. -->
<accessible>false</accessible>

Flash Builder screenshot of the accessible setting: [screenshot]
I don't know if Adobe made this change permanent or if it's just for development in the nightly builds or something.

About accessibility: http://www.yswfblog.com/blog/2008/07/22/accessible-trueflash-and-flex-accessibility-docs/