Monday, December 31, 2012

Download YouTube video and convert it for Android

For downloading videos from YouTube I use YouTubeDownloader2, which you can find here: http://sourceforge.net/projects/ytd2/. Why that and not a Firefox add-on? YouTubeDownloader2 is a standalone program written in Java which does not store my browsing habits in some remote DB. How do I know that? It is OSS: I downloaded the code, checked what it does, and compiled it. If you don’t do Java, trust others who do ;-)
Once the video is on the file system we can encode it. That should be easy on Linux, unless your distribution's repository ships an FFmpeg compiled without support for H.264, MP3 and a few other codecs. If you are an experienced Linux user you will visit the FFmpeg wiki, where there are detailed instructions on how to configure and build FFmpeg: https://ffmpeg.org/trac/ffmpeg/wiki/UbuntuCompilationGuide. I am on Mint, so that set of instructions is valid for me; people using other distros will look for their own. If you are not an experienced user, maybe MEncoder, which comes with everything enabled by default, is the better option. So, install libavcodec-extra-53 (or maybe 54) and mencoder. We can ask MEncoder what is supported by executing this in a terminal:

~ $ mencoder -ovc help
MEncoder svn r34540 (Ubuntu), built with gcc-4.6 (C) 2000-2012 MPlayer Team

Available codecs:
copy     - frame copy, without re-encoding. Doesn't work with filters.
frameno  - special audio-only file for 3-pass encoding, see DOCS.
raw      - uncompressed video. Use fourcc option to set format explicitly.
nuv      - nuppel video
lavc     - libavcodec codecs - best quality!
xvid     - XviD encoding
x264     - H.264 encoding


and it prints the list of available video codecs. For audio, the query looks like this:

~ $ mencoder -oac help
MEncoder svn r34540 (Ubuntu), built with gcc-4.6 (C) 2000-2012 MPlayer Team

Available codecs:
   copy     - frame copy, without re-encoding (useful for AC3)
   pcm      - uncompressed PCM audio
   mp3lame  - cbr/abr/vbr MP3 using libmp3lame
   lavc     - FFmpeg audio encoder (MP2, AC3, ...)
   faac     - FAAC AAC audio encoder


There is a small problem with MEncoder: there is no WinFF front end for it, and one has to do quite a bit of reading to assemble a proper set of parameters. To save you the learning curve, here is what I do:

~ $ mencoder -of lavf -lavfopts format=mp4 -oac lavc -ovc lavc -lavcopts vbitrate=480:aglobal=1:vglobal=1:acodec=libfaac:abitrate=64:vcodec=mpeg4:keyint=25 -ofps 15 -af lavcresample=44100 -vf harddup,scale=480:320 -mc 0 "/home/yourlogin/in.flv" -o "/home/yourlogin/out.mp4"

I want MPEG-4 video and AAC audio in an MP4 container. The video bitrate is capped at 480 kb/s, the frame is resized to 480x320 pixels, and the frame rate is 15 frames per second. The audio is resampled to 44.1 kHz with a bitrate of up to 64 kb/s. The input does not have to be FLV; this should work for any container you can find on YouTube. If your screen differs from 480x320, you may want to resize the video differently. For example, for a 320x240 screen, vbitrate=256 and scale=320:240 should be a good choice.
Not as nice as WinFF, but not very complicated either.
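If you convert for more than one screen size, the parameter block can be generated rather than retyped. Here is a small Python sketch that assembles the same argument list; the helper name and its defaults are mine, not MEncoder's:

```python
def mencoder_args(width, height, vbitrate, fps=15, abitrate=64,
                  infile="in.flv", outfile="out.mp4"):
    """Build the argument list for the mencoder invocation shown above."""
    lavcopts = (
        "vbitrate=%d:aglobal=1:vglobal=1:acodec=libfaac:"
        "abitrate=%d:vcodec=mpeg4:keyint=25" % (vbitrate, abitrate)
    )
    return [
        "mencoder", "-of", "lavf", "-lavfopts", "format=mp4",
        "-oac", "lavc", "-ovc", "lavc", "-lavcopts", lavcopts,
        "-ofps", str(fps), "-af", "lavcresample=44100",
        "-vf", "harddup,scale=%d:%d" % (width, height),
        "-mc", "0", infile, "-o", outfile,
    ]

# For a 320x240 screen, as suggested in the text:
print(" ".join(mencoder_args(320, 240, 256)))
```

Feed the list to subprocess.call if mencoder is installed, or just print it to inspect the command line before running it.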

Saturday, December 15, 2012

How to stack images using G’MIC

After a chat on the Astrophotography Google+ community https://plus.google.com/u/0/communities/118208937662082340807 I concluded that there may be enough interest for a tutorial on 16-bit image processing using G’MIC, so here it is.
G'MIC stands for GREYC's Magic Image Converter and we know it as a plugin for GIMP. It is less known that the G'MIC framework can be used on its own, through simple shell scripting, or possibly via C++ or a web interface. Very importantly, it supports 16-bit integers per channel, which is something GIMP is lacking.
So, what we are going to do here is take a look at the G’MIC tutorial http://gmic.sourceforge.net/tutorial.shtml, load images into a stack, average them, and save the result as 16-bit TIFF.
The tutorial says that if you are using G’MIC as a GIMP plugin and set logging to verbose, it will print what it does.

I prefer writing the output to a log file over the suggested running of GIMP from a terminal (command line for Windows users). The standard path and name for the log file is /tmp/gmic_log
Also from the tutorial we see that 16-bit processing starts with division by 256 and ends with multiplication by 256. While that looks like a reduction to 8 bits, it is important to note that G’MIC internally works with floating point numbers, so there is no loss of precision when we convert back to 16-bit integers. Once we are done with processing, it matters how we specify the TIFF output. G’MIC picks the format from the extension, but by default it writes 32-bit TIFF. So, to get 8-bit TIFF we need to specify output type uchar, and to get 16-bit TIFF we need to specify output type ushort.
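The no-loss claim is easy to verify yourself; here is a quick pure-Python check, exhaustive over all 16-bit values:

```python
# Dividing by 256 and multiplying back in floating point only shifts the
# binary exponent, so every 16-bit integer survives the round trip exactly.
lossless = all(int((v / 256.0) * 256.0) == v for v in range(65536))
print(lossless)  # → True
```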
Typical operations which we are going to use are:
  1.     -gimp_haar_smoothing 0.1,10,2,0,0 which is Smooth [wavelets] under Enhancements
  2.     -gimp_compose_average 1,0 which is Blend [average] under Layers
  3.     -gimp_compose_screen 1,0 which is Blend [screen] under Layers
So the complete workflow is: we pick the nicer RAWs and export them as 16-bit PPMs using darktable as in http://grumpyoldprogrammer.blogspot.com/2012/11/more-about-stacking.html, then we align them using align_image_stack from hugin-tools as in http://grumpyoldprogrammer.blogspot.com/2012/12/how-to-align-image-stack.html, and finally we process them with G’MIC.
Averaging layers will leave two layers at the end; if we want, we can save them as separate pictures, like this:

$ gmic tif0000.tif tif0001.tif tif0002.tif tif0003.tif -div 256 -gimp_compose_average 1,0 -mul 256 -c 0,65535 -type ushort -output[1] imageA.tiff -output[0] imageB.tiff

or we can stack those two layers in screen mode to boost light and stretch contrast, like this:

$ gmic tif0000.tif tif0001.tif tif0002.tif tif0003.tif -div 256 -gimp_compose_average 1,0 -gimp_compose_screen 1,0 -mul 256 -c 0,65535 -type ushort -output imageS.tiff

If we want to do resizing we can, as in the G’MIC tutorial, use -resize2dx 1600,5, which is bicubic resizing to 1600 pixels width.
Instead of averaging we can try other options, like -compose_median, which is the median, or any other available filter or combination of filters: we try it in GIMP with logging set to verbose, and later run the logged commands in the terminal without loss of data.
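A one-line illustration of why the median can be worth trying: with an outlier (a hot pixel, a passing satellite) in one frame, the average is pulled far off while the median ignores it. The pixel values below are made up:

```python
import statistics

# The same pixel position across four aligned frames; one frame has a hot pixel.
stack = [1000, 1010, 990, 65535]

print(statistics.mean(stack))    # → 17133.75 (dragged far up by the outlier)
print(statistics.median(stack))  # → 1005.0 (stays near the true signal)
```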

Monday, December 10, 2012

How to align image stack

Aligning images in GIMP is not very difficult if we have no rotation to attend to; if rotation is present, we would rather leave that task to the computer.
Aligning a stack of images is not only used in astrophotography; it is a common task in the creation of HDR images. So, we can use align_image_stack from hugin-tools to align images. The installation procedure on Mint or Ubuntu is simple. First we find it:

$ apt-cache search hugin
enblend - image blending tool
enfuse - image exposure blending tool
hugin - panorama photo stitcher - GUI tools
hugin-data - panorama photo stitcher - common data files
hugin-tools - panorama photo stitcher - commandline tools


and after that we install it:

$ sudo apt-get install hugin

It is available from the repositories, so there is no need to compile it from source. People using an operating system other than Linux should visit the Hugin website http://hugin.sourceforge.net/download/ and download the installer for their operating system.
Once Hugin is installed, align_image_stack should be available and we can align the image stack. If you do not have your own images, you will find download links in this tutorial: http://grumpyoldprogrammer.blogspot.com/2012/11/even-more-astrophotography.html
We place the JPEGs in some empty directory (if you have RAWs, convert them first), cd to that directory, and execute:

$ align_image_stack -a tif *.JPG

The option -a tif means we want indexed output with the prefix tif, so after a while we will see tif0000.tif, tif0001.tif and so on. The parameter *.JPG is the input list: align all JPEGs. Alternatively, we can specify the files one by one. Once alignment is done, we can start GIMP and open all images as layers.



In order to check alignment we can change the mode of the top layer from Normal to Difference in the Layers - Brushes floating window. To check further, we make the top layer invisible and repeat the same procedure for the next layer.



Once we are happy, we can stack the images as described in previous tutorials, or we can use the G’MIC plugin. G’MIC is located under Filters and opens as a separate window. We expand Layers, select Average, and set Input layers to All and Output mode to New image.



Clicking the Apply or OK button will create a new image. If you have a newer version of GIMP, Median is also interesting. Average produces two layers and Median only one; we can stack them again. For final processing you may like to do some contrast stretching, maybe some curves as well.

Thursday, December 6, 2012

RESTful web service for picture upload Android client

In the previous tutorial http://grumpyoldprogrammer.blogspot.com/2012/11/image-upload-via-restful-web-service.html I described how to create a simple server application and deploy it on Tomcat. Now we are going to take a look at the client application.
The simplest approach is to use Apache HttpComponents Client. It is possible to convert the HttpMime jar for Android and import it as a library, but since not all classes are required for picture upload, we can download the source and add the following files to the project:

AbstractContentBody.java
ByteArrayBody.java
ContentBody.java
ContentDescriptor.java
FormBodyPart.java
Header.java
HttpMultipart.java
HttpMultipartMode.java
MIME.java
MinimalField.java
MultipartEntity.java
StringBody.java


Naturally, a whole project like this becomes Apache licensed. Some of those Java files are in the org.apache.http.entity.mime.content package and others are in org.apache.http.entity.mime.
In this way we avoid compiling two files, FileBody.java and InputStreamBody.java.
Whether we then refactor the package names to fit them into the project is of minor relevance; I usually do.
We create new Android project and we code onCreate like this:

public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    try {
        bm = BitmapFactory.decodeFile("/sdcard/DCIM/FinalM8.JPG");
        if(bm==null)
            throw new Exception("no picture!");
        new FetchItemsTask().execute();
    } catch (Exception e) {
        e.printStackTrace();
    }
}


Please note that the image path is hardcoded, so change it accordingly. The bitmap bm, defined as a field, is loaded, and if it is not null we create and execute an AsyncTask to upload the picture. To keep things as simple as possible we will implement only doInBackground, like this:

protected Void doInBackground(Void... arg0) {
    try {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        bm.compress(CompressFormat.JPEG, 50, bos);
        byte[] data = bos.toByteArray();
        HttpClient httpClient = new DefaultHttpClient();
        HttpPost postRequest =
            new HttpPost("http://localhost:8080/WebApplication3/xyz/generic/images");
        ByteArrayBody bab = new ByteArrayBody(data, "FinalM8.JPG");
        MultipartEntity reqEntity = new MultipartEntity(
                HttpMultipartMode.BROWSER_COMPATIBLE);
        reqEntity.addPart("image", bab);
        postRequest.setEntity(reqEntity);
        HttpResponse response = httpClient.execute(postRequest);
        System.out.println("Status is "+response.getStatusLine());
        BufferedReader reader = new BufferedReader(new InputStreamReader(
                response.getEntity().getContent(), "UTF-8"));
        String sResponse;
        StringBuilder s = new StringBuilder();
        while ((sResponse = reader.readLine()) != null) {
            s = s.append(sResponse);
        }
        System.out.println("Response: " + s);
    } catch (Exception e) {
        // handle exception here
        e.printStackTrace();
    }
    return null;
}


The image is compressed to JPEG at quality 50, loaded into a byte array, and the upload is attempted. In order to run the code, just push some image onto the emulator and change the path. Note that from the Android emulator, localhost refers to the emulator itself; to reach a Tomcat running on the host machine, use 10.0.2.2 instead.

Wednesday, November 28, 2012

Image upload via RESTful web service

During some project I had to write an Android client for a RESTful web service where, among other things, image upload was the goal. The server side was handled by Zaries http://zaries.wordpress.com/, so it was Lisp on Hunchentoot. I didn’t want to start with two unknowns, implementing the problematic multipart/form-data on Android and writing a Lisp web service from scratch, so I searched for a simple Java example of an upload RESTful web service. I expected something on NetBeans, but the closest I managed to find was a Jersey hello world example done using Maven: http://www.mkyong.com/webservices/jax-rs/jersey-hello-world-example/. So here I am, writing a user-friendly tutorial on user-friendly NetBeans for the millions of prospective Android developers who are not quite comfortable with Java EE development yet. The Android client will be a separate article.
This was done on LinuxMint Maya, IDE is NetBeans 7.2 and deployment target is Tomcat 7.0.27.
We start from the New Project dialog, where from the group Java Web we select Web Application. With the exception of Server and Settings, where we want to target Apache Tomcat, we accept the defaults in all other steps. If we downloaded the bundle, GlassFish is the default target.




Now we want to add a RESTful web service. We right-click on the project icon in the left pane and select New -> RESTful Web Services from Patterns ...
Again we accept the default values, with the exception of the package name, where we type in za.org.droid. Before we start coding we add two libraries: Jersey 1.8 and JAX-WS 2.2.6. Inside the Projects pane we right-click on the Libraries folder and select Add Library ...


Now we can delete the useless GET and PUT Hello World methods generated by the IDE, and copy and paste this:

@POST
@Path("/images")
@Consumes(MediaType.MULTIPART_FORM_DATA)
public Response imageUpload(@FormDataParam("image") InputStream hereIsImage, @FormDataParam("image") FormDataContentDisposition hereIsName) {
    String path = System.getenv("HOME")+"/tmp/";
    if(hereIsName.getSize()==0) {
        return Response.status(500).entity("image parameter is missing").build();
    }
    String name = hereIsName.getFileName();
    path += name;

    try {
        OutputStream out = new FileOutputStream(new File(path));
        int read;
        byte[] bytes = new byte[1024];
        while ((read = hereIsImage.read(bytes)) != -1) {
            out.write(bytes, 0, read);
        }
        out.flush();
        out.close();
    } catch (IOException e) {
        return Response.status(500).entity(name + " was not uploaded\n"+e.getMessage()).build();
    }
    return Response.status(200).entity(name + " was uploaded").build();
}


We should create a tmp folder in our $HOME folder, where the images will be saved. We look for the image parameter: it tells us what the image is called, and it also contains the raw image data. We return a response informing the client how successful the upload attempt was.
Since we accepted the default names for the RESTful web service, it will have the “generic” path assigned; that is important because we use that path to call it.
The only thing left is the configuration. In the WEB-INF folder we create a web.xml file and paste the servlet registration in:
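The XML listing itself appears to have been lost from this post. Reconstructed from the /xyz end-point used below, a Jersey 1.x web.xml would look roughly like this; the servlet name ServletAdaptor is the NetBeans default, and the whole fragment should be treated as an assumption, not the original listing:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd">
    <servlet>
        <servlet-name>ServletAdaptor</servlet-name>
        <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>ServletAdaptor</servlet-name>
        <url-pattern>/xyz/*</url-pattern>
    </servlet-mapping>
</web-app>
```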




We can save the XML and deploy the web application. The end-point to call will be http://localhost:8080/WebApplication3/xyz/generic/images; the application name WebApplication3 may be different, so please change it accordingly. To test the upload one can use HttpClient; httpcomponents-client-4.2.1 contains a working example. I will add a blog post about the Android client in a day or two.
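If HttpClient is not handy, the multipart/form-data body the service consumes can also be assembled by hand. Here is a sketch using only the Python standard library; the field name, file name and end-point are the ones from this post, while the helper name and boundary string are mine:

```python
import mimetypes

def build_multipart(field, filename, payload, boundary="grumpyboundary"):
    """Hand-assemble a minimal multipart/form-data body, the format the
    @FormDataParam("image") end-point consumes."""
    ctype = mimetypes.guess_type(filename)[0] or "application/octet-stream"
    head = (
        "--%s\r\n"
        'Content-Disposition: form-data; name="%s"; filename="%s"\r\n'
        "Content-Type: %s\r\n\r\n" % (boundary, field, filename, ctype)
    ).encode("ascii")
    tail = ("\r\n--%s--\r\n" % boundary).encode("ascii")
    return head + payload + tail, "multipart/form-data; boundary=%s" % boundary

body, content_type = build_multipart("image", "FinalM8.JPG", b"not-really-a-jpeg")
print(content_type)  # → multipart/form-data; boundary=grumpyboundary
# To actually POST, send body with that Content-Type header, e.g. via
# urllib.request.Request("http://localhost:8080/WebApplication3/xyz/generic/images", ...)
```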

Sunday, November 25, 2012

Final part of Android tutorial

Here we are going to take a look at ContentProvider, subclassing of SimpleCursorAdapter, ListView, and the assembling of all that into an almost usable application.

ContentProvider


The most irritating concept of the whole Android platform. It is a kind of grand unified interface for accessing all data publicly available on the system. As we may expect from an Internet-search-oriented company, there is a Uri which starts with "content://", then we have an “authority” and a “base path”, like this:

private static final String AUTHORITY = "za.org.droidika.tutorial.SearchResultProvider";
private static final String TWEETS_BASE_PATH = "tweets";
public static final Uri CONTENT_URI = Uri.parse("content://" + AUTHORITY
        + "/" + TWEETS_BASE_PATH);


That is not all; we also differentiate between operations on a single item and bulk operations:

public static final int TWEETS = 100;
public static final int TWEET_ID = 110;
public static final String CONTENT_ITEM_TYPE = ContentResolver.CURSOR_ITEM_BASE_TYPE
        + "/vnd.org.droidika.tutorial.tweets";
public static final String CONTENT_TYPE = ContentResolver.CURSOR_DIR_BASE_TYPE
        + "/vnd.org.droidika.tutorial.tweets";


Further in the source code (available here: https://github.com/FBulovic/grumpyoldprogrammer) we combine our identifiers, load them into a UriMatcher, and add the DB mappings to a HashMap. That, with the help of SQLiteOpenHelper, is enough configuration and we can finally implement the CRUD methods. I will explain bulk insert, and you can find out what is going on in the others on your own.

public int bulkInsert(Uri uri, ContentValues[] values) {
    final SQLiteDatabase db = dbInst.getWritableDatabase();
    final int match = sURIMatcher.match(uri);
    switch(match){
    case TWEETS:
        int numInserted= 0;
        db.beginTransaction();
        try {
            SQLiteStatement insert =
                db.compileStatement("insert into " + DbHelper.TABLE_NAME
                        + "(" + DbHelper.USER + "," + DbHelper.DATE
                        + "," + DbHelper.TEXT + ")"
                        +" values " + "(?,?,?)");
            for (ContentValues value : values){
                insert.bindString(1, value.getAsString(DbHelper.USER));
                insert.bindString(2, value.getAsString(DbHelper.DATE));
                insert.bindString(3, value.getAsString(DbHelper.TEXT));
                insert.execute();
            }
            db.setTransactionSuccessful();
            numInserted = values.length;
        } finally {
            db.endTransaction();
        }
        getContext().getContentResolver().notifyChange(uri, null);
        return numInserted;
    default:
        throw new UnsupportedOperationException("unsupported uri: " + uri);
    }
}


We obtain a writable instance of the DB and ask the UriMatcher whether we have the right Uri; we do not want to attempt inserting wrong data, or to do a bulk insert of a single row. Next we compile the insert statement and open a transaction. Inside the for loop we bind the parameters and execute the inserts. If everything went well we mark the transaction successful and close it inside finally. At the end we ask the ContentResolver to notify subscribers about the new situation.
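The reason for wrapping the loop in a single transaction is speed: committing per row is orders of magnitude slower. The same pattern, sketched with Python's sqlite3 module (the table and column names are made up to mirror DbHelper):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table tweets (user text, date text, text text)")

rows = [("alice", "2012-11-25", "hello"), ("bob", "2012-11-25", "world")]

# One transaction around the whole batch, like db.beginTransaction() /
# setTransactionSuccessful() / endTransaction() in the Android code.
with conn:  # commits on success, rolls back on exception
    conn.executemany("insert into tweets (user, date, text) values (?,?,?)", rows)

print(conn.execute("select count(*) from tweets").fetchone()[0])  # → 2
```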
We do not pay any attention to resource management, we do not manage cursors, nothing. Suddenly SQLite is friendly and cooperative. That is probably the main reason, besides the ability to share data across application boundaries, why we should use providers. Naturally there is more, but it happens not in code but inside AndroidManifest.xml.
Within application element we place this:

<provider
    android:name=".SearchResultProvider"
    android:authorities="za.org.droidika.tutorial.SearchResultProvider" />


Now ContentResolver knows how to find our provider.


Subclassing SimpleCursorAdapter



Since the whole idea behind the application was implementing something like autocompletion, where we type and the ListView content changes while we are typing, we need a custom data adapter. There is only one interesting method to implement:

public Cursor runQuery(CharSequence constraint) {
    if (constraint == null || constraint.length() == 0) {
        c = contentResolver.query(SearchResultProvider.CONTENT_URI, null, null, null, null);
    } else {
        // bind the search string as a selection argument instead of
        // concatenating it, so a quote in the input cannot break the query
        c = contentResolver.query(SearchResultProvider.CONTENT_URI, null,
                DbHelper.TEXT + " like ?",
                new String[] { "%" + constraint.toString() + "%" }, null);
    }
    if (c != null) {
        c.moveToFirst();
    }
    return c;
}


If we have a search string we run a like query and return the cursor; if we do not have a search string we return everything. Again we do not manage the cursor; we just leave everything to Android, and it behaves really friendly.

Assembling application


The user interface is influenced by the SearchableDictionary sample from the Android SDK. We type our search string into an EditText at the top of the screen, and the data adapter and provider load the result into the ListView. In order to retrieve data we start a “cron job” which does a Twitter search every three minutes and stores the data into the DB. MainActivity contains an example of how to create and use a menu and how to check whether the network is available; the only nontrivial method there is this one:

private void buildList() {
    String[] columns = new String[] { DbHelper.DATE, DbHelper.TEXT,
            DbHelper.USER };
    int[] to = new int[] { R.id.textView2, R.id.textView4, R.id.textView6 };
    Cursor cursor = createCursor();
    final SearchableCursorAdapter dataAdapter = new SearchableCursorAdapter(this, R.layout.list_entry,
            cursor, columns, to);
    ListView listView = (ListView) findViewById(R.id.missingList);
    listView.setAdapter(dataAdapter);
    EditText textFilter = (EditText) findViewById(R.id.myFilter);
    textFilter.addTextChangedListener(new TextWatcher() {

        public void afterTextChanged(Editable s) {
        }

        public void beforeTextChanged(CharSequence s, int start, int count,
                int after) {
        }

        public void onTextChanged(CharSequence s, int start, int before,
                int count) {
            if (dataAdapter != null) {
                    dataAdapter.getFilter().filter(s.toString());
            }
        }
    });
    dataAdapter.setFilterQueryProvider(dataAdapter);
}


We set up the ListView, create the data adapter, assign it, and use a TextWatcher to run queries against the content provider. Not very complicated.
Again, visit the repository https://github.com/FBulovic/grumpyoldprogrammer, retrieve the code, and you have a quite comprehensive example, written in such a way that it is easy to understand. If you are going to use it in production, configure HttpClient properly; how that is done is described here: http://grumpyoldprogrammer.blogspot.com/2012/10/is-it-safe.html

Saturday, November 24, 2012

Install Oracle JDK in LinuxMint Maya or Ubuntu 12.04 LTS

This primarily describes the setup required for Android development on 64-bit LinuxMint Maya, which is very much the same as Ubuntu 12.04 but with a usable window manager. For those two popular distros we have OpenJDK in the repository and we can easily install it using apt-get from the terminal or the GUI Software Manager. But for Android development only the Oracle JDK is supported, and the Android SDK is 32-bit, which implies:

sudo apt-get install ia32-libs

Otherwise we will get a confusing error message, for example that adb was not found when we attempted to run it.
The current version of the Oracle JDK can be downloaded from here: http://www.oracle.com/technetwork/java/javase/downloads/index.html
For example, we select jdk-7u7-linux-x64.tar.gz, accept the license and download it using Firefox. When the download finishes we typically check its checksum; that is done from the terminal, so cd to Downloads and run:

$ md5sum jdk-7u7-linux-x64.tar.gz
15f4b80901111f002894c33a3d78124c  jdk-7u7-linux-x64.tar.gz


Here I do a Google search on the md5 sum to be sure that I downloaded the right archive. Then we unpack the archive by simply right-clicking on it and selecting Extract Here. That creates the directory jdk1.7.0_07 in Downloads. The JDK should be in /usr/lib/jvm unless we want to specify the execution path every time; for example, this is how it looks on my box:

/usr/lib/jvm $ ls -l
total 20
lrwxrwxrwx 1 root root   24 Oct 31 15:39 default-java -> java-1.6.0-openjdk-amd64
lrwxrwxrwx 1 root root   24 Oct 31 15:39 java-1.6.0-openjdk -> java-1.6.0-openjdk-amd64
lrwxrwxrwx 1 root root   20 Oct 31 15:39 java-1.6.0-openjdk-amd64 -> java-6-openjdk-amd64
lrwxrwxrwx 1 root root   24 Oct 31 15:39 java-6-openjdk -> java-1.6.0-openjdk-amd64
drwxr-xr-x 7 root root 4096 Nov  4 00:00 java-6-openjdk-amd64
drwxr-xr-x 3 root root 4096 May  3  2012 java-6-openjdk-common
lrwxrwxrwx 1 root root   24 Nov  1 11:26 java-6-oracle -> /usr/lib/jvm/jdk1.6.0_37
drwxr-xr-x 5 root root 4096 May  3  2012 java-7-openjdk-amd64
lrwxrwxrwx 1 root root   24 Oct 31 16:52 java-7-oracle -> /usr/lib/jvm/jdk1.7.0_07
drwxr-xr-x 8 root root 4096 Nov  1 11:22 jdk1.6.0_37
drwxr-xr-x 8 root root 4096 Aug 29 03:12 jdk1.7.0_07


In order to move jdk1.7.0_07 out of Downloads we can use

sudo mv jdk1.7.0_07 /usr/lib/jvm/

We do that from the terminal while in Downloads, or we can start caja or nautilus as root and do it from the GUI. If we are in the GUI we recursively change ownership to root using Properties; if we are doing it from the terminal:

sudo chown -R root:root /usr/lib/jvm/jdk1.7.0_07

Now we need a symlink, which we will use later to switch between different versions of Java:

sudo ln -s /usr/lib/jvm/jdk1.7.0_07 /usr/lib/jvm/java-7-oracle

Now we can install the runtime and compiler as alternatives:

sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/java-7-oracle/jre/bin/java 2
sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/java-7-oracle/bin/javac 1


That allows us to configure the runtime and compiler, respectively, using the following two commands:

sudo update-alternatives --config java
sudo update-alternatives --config javac


We simply type the number of the desired version and hit Enter. To check what we are actually running we can execute:

javac -version
java -version

Friday, November 23, 2012

More about stacking

In the last tutorial we stacked a few frames using GIMP after manually aligning them. Since we were setting the layers to screen mode, the result was quite bright at the end. What we can do to make it more natural is to make the additional layers transparent: opacity of the bottom layer 100%, the next 50%, the next 25%, and so on.
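A note on the arithmetic, which is my own addition: halving the opacity each time does not give an exact average. For the mean of N layers in Normal mode, the nth layer from the bottom needs opacity 1/n (100%, 50%, 33%, 25%, ...). A quick check of both progressions, with made-up pixel values:

```python
def composite(layers_with_opacity):
    """Normal-mode compositing, bottom to top: out = a*top + (1-a)*out."""
    out = 0.0
    for value, alpha in layers_with_opacity:
        out = alpha * value + (1 - alpha) * out
    return out

layers = [10.0, 20.0, 30.0, 40.0]

# 1/n opacities reproduce the arithmetic mean exactly:
exact = composite([(v, 1.0 / n) for n, v in enumerate(layers, start=1)])
print(round(exact, 6))  # → 25.0, the mean of 10, 20, 30, 40

# Halving (100%, 50%, 25%, 12.5%) weights the lower layers more heavily:
halved = composite(list(zip(layers, [1.0, 0.5, 0.25, 0.125])))
print(halved)
```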

Automated stacking


Here Windows users have a really wide choice of free or paid programs, for example Deep Sky Stacker and RegiStax. If there are a dozen or more frames to stack, it pays to start KVM or VBox and struggle with Windows for a few minutes in order to use Deep Sky Stacker. There is no real equivalent which will do the same task on Linux: register, align and stack frames using a GUI. For a smaller number of frames we can use ale, the Anti-Lamenessing Engine written by David Hilvert. Typically it is not compiled with ImageMagick support enabled, so we have to prepare the images to be processed and convert them into PPM format. We could install ImageMagick and use mogrify and convert to convert the photos from the terminal, but in order to make things easier we will use a GUI.
If we are using a DSLR camera, or some of those new compact cameras, we may be able to store images in RAW format, which allows us to do a significant amount of processing on the image. There are a few really good programs for RAW processing, like UFRaw, RawTherapee or darktable. We are going to use the last one, darktable. After installation we right-click on a RAW file and select Open With Darktable from the context menu. The user interface is unconventional, so here is a quick explanation. We want our RAW to be converted to PPM, and we want to remove hot pixels.
The proper removal of hot pixels would be taking “dark frames” at the end of the session: place the cap on the lens and take pictures using the same ISO value and exposure time as the data frames. After that we can add the dark frame to a data frame as a layer in GIMP and subtract it to remove hot pixels.
When darktable shows up we will see, in the right pane, a tab which says more plugins. Clicking on it opens it, and we select the hot pixels plugin. Clicking again closes it, and under the correct tab is the hot pixels plugin, which is "switched off". We "switch it on" and it removes hot pixels.
We are happy with all the default processing so far. Now we want to export the image to PPM. For that we hit the “l” key, which brings us to lighttable mode. On the right pane we locate export selected and make it look like this:

Export is achieved via the export button or the keyboard shortcut Ctrl+e. To go back to darkroom mode we hit “d”. We repeat the process on all frames. If we added some exposure, an EV or maybe two, it is likely that we have generated noise. Also, if we do not want a full-size picture we may want to resize it; ale will finish processing much faster. In GIMP we do Filters -> Enhance -> Wavelet Denoise, and here is how it looks after and before denoising:

If we want to resize, that is Image -> Scale Image, and we export the image back to the original PPM.
Now we can open a terminal and cd to the folder with the PPMs. This is what I did, and what the output in the terminal was:


ale --md 64 --dchain fine:box:1 *.ppm stacked.ppm
Output file will be'stacked.ppm'.                                 
Original Frame: 'img_0001_01.ppm'. 
Supplemental Frames: 
'img_0001_02.ppm'***** okay (88.869874% match). 
'img_0001_03.ppm'***** okay (90.897255% match). 
'img_0001.ppm'***** okay (91.984785% match). 
Iterating Irani-Peleg. 
Average match: 90.583972% 


Here is the explanation: --md sets the element minimum dimension (100 is the default), --dchain fine:box:1 approximates drizzling, and the two remaining parameters are input and output. To get more info execute ale --hA.
The result was rather cold and dark. In order to bring some warmth and dynamics I opened stacked.ppm in GIMP and did Colors -> Components -> Decompose, selecting LAB from the drop-down. Then, for each layer, I made only the one I was currently working on visible and the others invisible, clicking on the eye in the Layers docking window. After duplicating the layer and setting the copy to Overlay mode, I merged the visible layers, accepting the default option expand as necessary. That was repeated for the L, A and B layers.


After that, Colors -> Components -> Compose, again selecting LAB. At the end, Colors -> Auto -> White Balance and Edit -> Fade Levels with the default Replace mode and Opacity 33. Here is the result:

Those are the same four frames from the last tutorial; if you do not have your own to process, download them, convert them to PPM, and you can try ale stacking and post-processing with them.

Wednesday, November 21, 2012

Even More Astrophotography

Everything that was described in the previous tutorial should work on Windows as well. Fiji uses Java and works everywhere as advertised, and a GIMP installer for Windows certainly exists. How to install the plugin registry on Windows I really do not know; Google is your friend.

We can download and process FITS files from many places. Beside the mentioned LCOGT, we can use Hubble data from http://hla.stsci.edu/hlaview.html or data from Misti Mountain Observatory http://www.mistisoftware.com/astronomy/index_fits.htm, to name a few.

All that is nice, but the real fun begins when we capture our own data using our own camera.

Taking pictures without tracking


Any kind of camera with any kind of lens capable of delivering sharp pictures will do. There is another inexpensive piece of equipment which is a must: a tripod. I am using an old Sony A290 DSLR with a SAL75300 telephoto lens.
Now we go out, place the tripod, and put the camera on it. Select manual mode, set ISO to 800, maybe 1600, and make the exposure as long as possible but not too long, to avoid star trailing. A shorter focal length will allow longer exposures. How long can the exposure be? That depends on many things: your position on the globe, the declination of the target, and so on. With a 75mm focal length on a DSLR, which corresponds to 112.5mm on a 35mm SLR, I am happy with 4 to 5 seconds of exposure in Johannesburg, South Africa. If a 50mm lens were available I would go for a 6 to 8 second exposure. So, select the exposure, go into drive mode, select a three or five shot burst, aim, and fire.
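Those numbers happen to agree with a common rule of thumb, the “rule of 500” (my label, not something stated in the text): maximum exposure in seconds is roughly 500 divided by the 35mm-equivalent focal length. A sketch:

```python
def max_exposure_seconds(focal_length_mm, crop_factor=1.5):
    """Rule-of-500 estimate of exposure before star trails appear.

    A rough guide only; the usable limit also depends on declination,
    pixel pitch and how picky you are about trailing."""
    return 500.0 / (focal_length_mm * crop_factor)

# 75mm on a 1.5x crop DSLR, i.e. 112.5mm equivalent, as in the text:
print(round(max_exposure_seconds(75), 1))  # → 4.4
# A 50mm lens:
print(round(max_exposure_seconds(50), 1))  # → 6.7
```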
If you are going to use stacking software like Deep Sky Stacker, you can take RAW and JPEG pictures simultaneously. Deep Sky Stacker will not work on Linux, and you will have to run Windows inside a virtual machine to use it.
When you have a few nice snapshots of, for example, the Milky Way, you can go back to the computer and stack them using GIMP. We will describe stacking in the last part of the article.
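The exposure limits discussed above roughly follow the common "500 rule" of thumb: divide 500 by the 35mm-equivalent focal length to get a rough trailing-free exposure in seconds. This is a generic guideline, not the author's exact method, but it lands in the same ballpark as the figures above:

```java
// Rough "500 rule" sketch: maximum untracked exposure before visible star
// trailing is approximately 500 / (focal length in mm * crop factor).
// This is a generic rule of thumb; real limits also depend on declination,
// pixel size and how picky you are about trailing.
public class ExposureRule {
    static double maxExposureSeconds(double focalLengthMm, double cropFactor) {
        return 500.0 / (focalLengthMm * cropFactor);
    }

    public static void main(String[] args) {
        // 75mm lens on an APS-C DSLR (crop factor 1.5) = 112.5mm equivalent
        System.out.printf("75mm: %.1f s%n", maxExposureSeconds(75, 1.5)); // ~4.4 s
        System.out.printf("50mm: %.1f s%n", maxExposureSeconds(50, 1.5)); // ~6.7 s
    }
}
```

The results agree with the empirical 4-5 and 6-8 second exposures mentioned above.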

Manual tracking


As soon as you stack a few snapshots, your appetite will start growing. Going for a hundred snapshots is not the way forward. The way forward is to increase exposure time. If you do not have $$$$ to spend on a real telescope with a computerised equatorial mount, you need to look for a cost-effective solution: for example, build a barn door tracker, or you may just have a cheap telescope with an equatorial mount which is only good for taking Moon snapshots. Cheap telescopes come with poor quality equatorial mounts which are very shaky. The worst thing you can attempt is spending money to stabilise a cheap mount. Attach a weight to its tripod, wrap a few rounds of rope around the legs to tighten it, and that's it. It doesn't look nice but it does the job.

Placing the camera on the telescope can be done using a proper piggyback mount, or you can make one. I am using ordinary wire ties tightened between the camera and a quick release head. Don't be too sloppy - you may destroy the camera that way.
Place an eyepiece with higher magnification in, locate a bright star close to the edge of the viewing field, and you are ready to go. My camera goes up to 30 seconds and after that BULB; for BULB I need a remote, so 30 seconds is what I am aiming for. How long can the exposure be? It depends on the equatorial mount setup; with a quick alignment you should be able to pull off one minute. Tolerance for your tracking errors is again a function of focal length - 300mm is likely just a waste of time, with maybe one good frame out of a dozen.

Stacking frames in GIMP


I uploaded four re-sized frames of the area around M 8 if you want to practice before you take your own snapshots. Here are the links:

https://docs.google.com/open?id=0B0cIChfVrJ7WbkJtZjg3LWlGbXM
https://docs.google.com/open?id=0B0cIChfVrJ7WSWF1LUZ5UnBVRTg
https://docs.google.com/open?id=0B0cIChfVrJ7WMzdnRXdmZmJOSDQ
https://docs.google.com/open?id=0B0cIChfVrJ7WNFplbk5kb2ozYUU

We open the first frame and, if required, reduce noise as in the previous tutorial.
If you are using my frames, which are re-sized, there is no need for noise reduction or hot pixel removal. Hot pixel removal would remove quite a few stars on a re-sized image.
Since those are longer exposure captures we will have hot pixels. To eliminate them we open Filters -> G'MIC -> Enhancement -> Hot pixels filtering and apply it with default values. Now we open the next frame as a layer, select it in Layers and in Filters we Repeat "G'MIC". Then we set the layer mode from Normal to Screen, zoom to 100% or more and align the layers. We repeat the same for the remaining frames. At the end we Merge Visible Layers from the layers context menu (right click a layer), accepting the default "expand as necessary" option. If the picture is too bright, which will be the case with the supplied pictures, we do contrast stretching. As we add frames we increase the level of signal, and the histogram changes like this:


So, Colors -> Auto -> White Balance and after that Edit -> Fade Levels, where we set the mode to Multiply. This is what the final result should look like:

That would be simple manual stacking of JPEG frames with a satisfactory final result. We could also align the color levels on this picture, but that was not the goal of this tutorial.

Sunday, November 18, 2012

GIMP, Fiji and astrophotography on Linux


Ever wanted to know how to process those wonderful Hubble-like pictures on Linux? It is not difficult; I will show you how. We will do RGB processing - LRGB is very similar.
On the Web we will encounter plenty of tutorials where the author uses some proprietary tool to do contrast stretching and then Photoshop for the final processing. Usually neither of those is available on Linux. The replacement for Photoshop is a no-brainer: naturally, the popular GIMP. The tool for contrast stretching is trickier. For nonlinear stretching I am using Fiji, which is almost the same as ImageJ. I guess there is no distribution which doesn't offer GIMP, so just follow the usual install path for your distro. It is a good idea to install gimp-plugin-registry as well. Fiji you will have to download from http://fiji.sc/wiki/index.php/Downloads and untar it. It comes with or without a Java runtime, so pick what you need depending on whether you already have Java or not.
Now we have installed the required programs and we need data. Typically the data consists of three or more gray images in FITS format. FITS is an abbreviation for Flexible Image Transport System. A good source of FITS data is the Las Cumbres Observatory Global Telescope Network; here is their website http://lcogt.net/
They have two two-meter reflectors and a few smaller telescopes, and observation data is freely available under a Creative Commons license. If you get data from a 2m telescope you will end up with three 2008x2008 pixel images, about eight megabytes each. So go there and pick some galaxy from the Observations section; I will go for NGC6946, it looks nice. After downloading the Blue, Green and Red FITS files we can start.

And we immediately see why we need contrast stretching - barely a few stars are visible. Now from the menu we do Process -> Enhance Contrast, tick "Equalize histogram" and hit the OK button. The result looks like this:

There is much more to see, but also a huge amount of noise. We repeat the same story for the remaining files and save them as TIFF. If we want, we can go to Image -> Color -> Merge Channels and create a composite to see approximately how it will look.

It is nice but there is too much noise - time for GIMP.
We open all three TIFFs in GIMP and do Filters -> Enhance -> Wavelet Denoise with settings as in the picture. If you don't have gimp-plugin-registry installed there will be no Wavelet Denoise; in that case just run Despeckle a few times.

We do the same on the remaining two pictures, pressing Ctrl+F to repeat the last filter. The next step is Image -> Mode -> RGB followed by Colors -> Colorify, where we apply the actual filter color to each image.
Now we copy green and paste it as a layer over red, rename "Pasted Layer" to something and change the layer mode to Screen; we do the same with the blue one.

If the alignment is OK we can merge the layers.
Now we can play with curves, levels or do decomposition to enhance colors and so on - get imaginative here. Here is how it looks without any additional processing.

If the frames are properly aligned we could place them as layers in a single image and do Colors -> Components -> Compose, which is simpler than doing Mode and Colorify.
If we have LRGB, then we process RGB as above; L we stretch and denoise, and at the end we use it as the value layer and RGB as the color layer.

Saturday, October 27, 2012

Tutorial part 3

In this part we are going to take a look at AsyncTask, a light-weight framework intended to save programmers from concurrency traps and from manual handling of Java threads.

AsyncTask

The most important thing is that AsyncTask is intended to be used on the UI thread, so do not try to use it elsewhere. The signature is android.os.AsyncTask&lt;Params, Progress, Result&gt;; you can use the type parameters or ignore them and replace unused ones with Void. If you are planning to pass an array of strings as parameters and not use progress or result, then it becomes android.os.AsyncTask&lt;String, Void, Void&gt;.
There are four commonly used and overridden methods:

protected void onPreExecute ()
protected abstract Result doInBackground (Params... params)
protected void onProgressUpdate (Progress... values)
protected void onPostExecute (Result result)


Obviously only doInBackground is compulsory to override, and it is the only one which does not execute on the UI thread. A typical implementation looks like this:

private class FetchItemsTask extends AsyncTask&lt;Void, Void, Void&gt; {
    @Override
    protected void onPreExecute() {
        super.onPreExecute();
        progressDialog.setMessage("Downloading. Please wait...");
        progressDialog.show();
    }
    @Override
    protected Void doInBackground(Void... params) {
        // do some heavy I/O work here
        return null;
    }
    @Override
    protected void onPostExecute(Void result) {
        // pass the result of the work to the UI here
        progressDialog.dismiss();
    }
}


It is usually declared as an inner class inside an Activity, so we have an instance of ProgressDialog accessible.
It is not a full size threading framework; it should be used as suggested, for lengthy I/O operations - lengthy meaning a few seconds, not a few minutes. So it is a good candidate for downloading an image from the Internet or pulling data from a DB. A very good idea is to go to your Android SDK Manager, download the source code and see what the implementation of AsyncTask looks like. An equally good idea is to place logging in every method and log the ID of the current thread, so that we can be sure where each part executes.
If we take a look at my example in git://github.com/FBulovic/grumpyoldprogrammer.git we will see that my BroadcastReceiver contains an AsyncTask. Knowing that AsyncTask can be instantiated only from the UI thread, we can conclude that BroadcastReceiver is also executed on the UI thread. That was a nice intro to CursorAdapter and ContentProvider, where execution happens outside of the UI thread but they need to communicate with the UI thread, and that may cause some problems here and there - especially if you want to access the SQLite DB directly without using a ContentProvider.
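The advice about logging the current thread ID can be illustrated without any Android classes: the same caller/worker split that AsyncTask hides is visible with a plain ExecutorService. This is a generic Java sketch, not Android code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Plain-Java illustration of why logging the thread ID is informative:
// the "doInBackground" analogue runs on a different thread than the caller,
// while the caller-side code (the onPostExecute analogue) does not.
public class ThreadIdDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        String callerThread = Thread.currentThread().getName();
        Future<String> worker = pool.submit(
                () -> Thread.currentThread().getName()); // background work
        System.out.println("caller: " + callerThread);
        System.out.println("worker: " + worker.get());   // a different thread name
        pool.shutdown();
    }
}
```

Logging Thread.currentThread().getId() inside each overridden AsyncTask method gives you the same kind of evidence on Android.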
Next part of this tutorial will follow, soon.

Thursday, October 25, 2012

Tutorial part 2

Since we have a scheduler to retrieve data, maybe we want to put the data into a database. On Android we have SQLite as part of the environment, so the choice is easy.

Subclassing SQLiteOpenHelper

Things are straightforward here. There are two methods which must be implemented:

@Override
public void onCreate(SQLiteDatabase db) {
    db.execSQL(createSQL);
}
@Override
public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
    db.execSQL("DROP TABLE IF EXISTS " + TABLE_NAME);
    onCreate(db);
}


The variable createSQL is a String containing a simple SQL statement to create the table:

CREATE TABLE tweets (_id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, 
    user_name TEXT NOT NULL, date_name TEXT NOT NULL, 
    text_name TEXT NOT NULL)


The auto-incrementing primary key _id is required by Android and, to use it in data binding, it must be named exactly like this.
Another thing which is required is a constructor:

public DbHelper(Context context) {
    super(context, DATABASE_NAME, null, 1);
}


Besides the context, we need to pass the DB name, an optional CursorFactory and the schema version to the super-class constructor.
Using the helper looks like this:

DbHelper dbW = new DbHelper(this);
SQLiteDatabase db = dbW.getReadableDatabase();
Cursor cursor = db.query("tweets", new String[] { "_id", DbHelper.DATE,
        DbHelper.TEXT, DbHelper.USER }, null, null, null, null, null,
        null);
cursor.moveToFirst();
...
// do something with the cursor
...
cursor.close();


It is obviously called DbHelper, and since only a query needs to be done, a readable database is enough. The query method looks odd; since we are selecting everything we just provide the table name and a collection of columns. Once work with the cursor is done we need to close it - SQLite is quite tough on resource management. The other parameters of the query method are selection, selectionArgs, groupBy, having, orderBy and limit. If we want to specify a selection it will be something like this:

// NOTE: concatenating searchString into the selection invites SQL injection;
// prefer '?' placeholders with the selectionArgs parameter.
cursor = db.query("route", new String[] { "_id",
        DbHelper.ADDRESS, DbHelper.PHONE,
        DbHelper.SURNAME }, DbHelper.STATUS + "="
        + Integer.toString(filterEquals) + " and "
        + DbHelper.ADDRESS + " like '%" + searchString + "%'",
        null, null, null, null, null);


So the selection is the "where" clause.
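To make that concrete, here is a rough plain-Java sketch (an assumption for illustration, not the actual Android implementation) of the statement such a query() call boils down to. Note how keeping '?' placeholders in the selection, with the values passed via selectionArgs, avoids splicing user input into the SQL:

```java
// Sketch of how query(table, columns, selection, ...) maps onto a SELECT.
// buildSelect is a hypothetical helper for illustration only.
public class QuerySketch {
    static String buildSelect(String table, String[] columns, String selection) {
        String cols = String.join(", ", columns);
        return "SELECT " + cols + " FROM " + table
                + (selection == null ? "" : " WHERE " + selection);
    }

    public static void main(String[] args) {
        // concatenated version: the search string ends up inside the SQL text
        System.out.println(buildSelect("route",
                new String[] { "_id", "address" },
                "status=1 and address like '%main%'"));
        // safer shape: '?' stays in the selection, values go in selectionArgs
        System.out.println(buildSelect("route",
                new String[] { "_id", "address" },
                "status=? and address like ?"));
    }
}
```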
Instead of this pseudo-OO approach, raw SQL can be used:

String updateSQL = "UPDATE my_table SET complete = 4 WHERE status > 0;";
dbW.getWritableDatabase().execSQL(updateSQL);


It is certainly easier at the beginning to use normal SQL - less unknown stuff to fight with.
If we take a look at the code for this (and other) tutorials, we see that I never used SQL directly except inside the provider. If you use it directly, be prepared for really strict resource management: Android likes executing things on different threads, which doesn't go nicely with SQLite, which doesn't let you close cursors created on a different thread from yours. The cure for this is to call startManagingCursor and leave it to Android to close the dangling cursor. Do not be discouraged by the deprecation of startManagingCursor; if you can't control execution, it is much better than leaking resources. All that nightmare suddenly stops if you do not use the SQLite DB directly but wrapped up in a provider. A political approach: you are free to do what you want, but we will make you use what we want you to use.
The code for this tutorial (and others) is here: git://github.com/FBulovic/grumpyoldprogrammer.git
Next part of this tutorial will follow.

Tuesday, October 23, 2012

Android tutorial (multipart)

This tutorial will cover AsyncTask, SQLite, SQLiteOpenHelper, ContentProvider, AlarmManager, BroadcastReceiver, a custom filterable CursorAdapter, ListView, TextWatcher, JSONArray and JSONObject.
We will also use HttpClient, but I have talked about that already. The goal of the tutorial is to periodically fetch search results from Twitter, store them in a DB and display them in a list view. We will make those search results searchable.

Cron job Android style

Android is Linux, but the cron daemon is missing, so we must use AlarmManager and BroadcastReceiver. The simplest form of BroadcastReceiver looks like this:

public class CronJobScheduler extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        System.out.println("inside onReceive");
    }
}


To use it from your Activity you will do something like this:

Context context = getBaseContext();
AlarmManager manager=(AlarmManager)context.getSystemService(Context.ALARM_SERVICE);
Intent intent = new Intent(context, CronJobScheduler.class);
PendingIntent pending = PendingIntent.getBroadcast(context, 0, intent, 0);
manager.setRepeating(AlarmManager.RTC_WAKEUP, System.currentTimeMillis(), 1000 * 20 , pending);


and at the appropriate place you will also stop the scheduled job, like this:

Context context = getBaseContext();
Intent intent = new Intent(context, CronJobScheduler.class);
PendingIntent pending = PendingIntent.getBroadcast(context, 0, intent, 0);
AlarmManager alarmManager = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
alarmManager.cancel(pending);


But that is not all: in AndroidManifest.xml we must also declare the receiver, something like this:

&lt;receiver android:name=".CronJobScheduler" /&gt;

It must be placed within the application element. If we build the app and run it in the emulator, it will print "inside onReceive" every 20 seconds.
In order to do something useful (like download JSON, parse it and insert the data into the DB) we will later use this code:

PowerManager manager = (PowerManager) context.getSystemService(Context.POWER_SERVICE);
PowerManager.WakeLock lock = manager.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "HOWZIT:)");
lock.acquire();
cr = context.getContentResolver();
new FetchItemsTask().execute();
// note: execute() returns immediately, so strictly speaking the lock should
// be released in onPostExecute(), after the background work has finished
lock.release();


In order to have this executed we will need to add some permissions to AndroidManifest.xml, like this:

&lt;uses-permission android:name="android.permission.WAKE_LOCK" /&gt;
&lt;uses-permission android:name="android.permission.INTERNET" /&gt;

We are acquiring PowerManager.PARTIAL_WAKE_LOCK, which is a CPU lock: we want the CPU to stay awake so that the scheduled work can be done. As usual, it is a good idea to release the CPU lock when the work is done ;-)
If anybody needs the code for this, it is available here: git://github.com/FBulovic/grumpyoldprogrammer.git - there are a few other things there, too.
Next part of this tutorial will follow, soon.

Is it safe?

Looking at the tech headlines I spotted this:

Over 1,000 Android Apps Contain SSL Flaws by Tom Brewster

If they were claiming the existence of some virus or Trojan which works on Android, I would just ignore it as complete nonsense. After the initial "this is FUD" reaction I proceeded with reading and encountered the following claims:

More than 1,000 legitimate Android apps contain SSL (Secure Sockets Layer) weaknesses, leaving them vulnerable to Man-in-the-Middle (MITM) attacks, researchers have claimed.

Without naming names, the students found one “generic online banking app”, which was trusting all certificates, even the MITM proxy with a self-signed certificate set up by the researchers. It had between 100,000 and 500,000 users.


Not naming names is a really nice touch - maybe more unsuspecting users can fall victim to MITM criminals.
Knowing that software companies lately rely on idiotic test-based recruitment processes, I started to change my opinion from trolling-FUD to "it is actually possible". In pursuit of margins, software companies are inclined to employ younger, inexperienced and cheaper candidates, but at the same time they force them to work faster and be as productive as more experienced programmers. In the end it hurts them (couldn't care less for them), the final users and the whole development platform.
Naturally the FUD ingredient can't be completely omitted: is the situation with security implementation on proprietary mobile operating systems any better? We can mess up SSL configuration on every kind of OS using the same HttpComponents.
So, the problem is not the platform - it is difficult to imagine something safer than Linux (Android uses the Linux kernel) - but the incompetence of would-be programmers and their would-be project managers. All those C# .NET Windows developers suddenly becoming experts for Linux and Android - crazy stuff.

Apache HttpComponents and HttpClient

Android uses the 4.x version of HttpClient, an old and reputable library. The problem is that in pursuit of deadlines people jump on Google search and end up with some half-baked solution from Stack Overflow. Stack Overflow cannot replace the documentation, and that is not its purpose. Read the manual. If you are not sure what the documentation says, download the source code - yes, it is open source, you can do that - and see how they use the class in examples and tests. That is faster than reading threads on different forums.
It is possible to configure HttpClient, something like this:

private static final DefaultHttpClient client;
private static HttpParams params = new BasicHttpParams();
static {

    HttpProtocolParams.setVersion(params, HttpVersion.HTTP_1_1);
    HttpProtocolParams.setContentCharset(params, "utf-8");
    params.setBooleanParameter("http.protocol.expect-continue", false);
    SchemeRegistry registry = new SchemeRegistry();
    registry.register(new Scheme("http", PlainSocketFactory
            .getSocketFactory(), 80));
    final SSLSocketFactory sslSocketFactory = SSLSocketFactory
            .getSocketFactory();
    sslSocketFactory
            .setHostnameVerifier(SSLSocketFactory.BROWSER_COMPATIBLE_HOSTNAME_VERIFIER);
    registry.register(new Scheme("https", sslSocketFactory, 443));
    ClientConnectionManager manager = new ThreadSafeClientConnManager(
            params, registry);
    ConnManagerParams.setMaxTotalConnections(params, 20);
    ConnManagerParams.setMaxConnectionsPerRoute(params,
            new ConnPerRoute() {
                public int getMaxForRoute(HttpRoute httproute) {
                    return 10;
                }
            });
    client = new DefaultHttpClient(manager, params);
}


It doesn't have to be used in a "just give me defaults" style. We need a thread safe connection manager since we are going to use threads later to check whether the singleton is thread safe.
If one searches for "How to ignore SSL certificate errors in Apache HttpClient 4.0", there are nice examples of how not to configure HttpClient. That may be acceptable in an early development stage, but releasing things like that is asking for a lawsuit.
Once the client is configured, one may want to provide it in an organized manner to the rest of the application; a singleton looks like ideal storage for it.

Lazy Initialization Holder Class Idiom

This one has been quite popular in the Java world since the appearance of the book Java Concurrency in Practice by Goetz and others. It relies on the Java Language Specification and JVM class loading. Here is approximately how it looks:

public class SafeLazy {
    private static class SingletonHolder {
        private static final DefaultHttpClient client = new DefaultHttpClient();
    }

    public static DefaultHttpClient getInstance() {
        return SingletonHolder.client;
    }
}


Since the outer class doesn't have static fields, it will be loaded without initializing HttpClient and SingletonHolder. The JVM will perform initialization of the inner class only when it needs to be accessed. Naturally, one may want to configure that client slightly, like in the previous code example.
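The laziness claim is easy to verify in plain Java. The sketch below (a stand-in Object instead of a configured HttpClient) uses a side-effect flag to show that the holder class is initialized only on the first getInstance() call:

```java
// Demonstration of the lazy initialization holder class idiom: the inner
// Holder class (and its expensive field) is initialized only when first
// accessed, and the JVM guarantees this happens exactly once, thread-safely.
public class SafeLazyDemo {
    static boolean initialized = false;

    private static class Holder {
        static final Object INSTANCE = create();
    }

    private static Object create() {
        initialized = true; // side effect so we can observe when init happens
        return new Object();
    }

    public static Object getInstance() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        System.out.println(initialized);        // false: outer class loaded, Holder not
        Object a = getInstance();
        System.out.println(initialized);        // true: first access triggered init
        System.out.println(a == getInstance()); // true: same instance every time
    }
}
```

The same structure carries over directly to the DefaultHttpClient version above.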
Now you can avoid problems of the type "if I try successive downloads it just hangs", "I trust every certificate from everywhere" and so on.
If you need a complete sample with a test, it is available here https://github.com/FBulovic/isitsafe.git - you know how to use Git, don't you?

Wednesday, June 13, 2012

Who you follow but he or she does not follow you back on Twitter

This tutorial primarily targets DIY-inclined nonprogrammers or very lazy programmers. To interact with Twitter professionally one would use their API, which is quite well documented; the development portal is available here https://dev.twitter.com/
For the DIY approach we are happy to pull the HTML page source and work with it. I use Firefox and the add-on Firebug by Joe Hewitt. If you do not have Firebug among your add-ons - install it. If you are using Google Chrome, Cedric created a similar tutorial here http://www.cedricve.me/2012/04/10/howto-scrape-your-twitter-followers-with-perl/ though the code and goal are somewhat different.
Now log in to your Twitter account and expand all your followers to the bottom of the page. If we save the page now, the required data will not be there - it is loaded via JavaScript as we scroll and expand the list. To see the data we invoke Firebug via the context menu: right click on the page and select "Inspect Element with Firebug".


In the search box (to the right of the context menu on the picture) we type "stream-items-id" and hit Enter. Selecting the whole "div", we right click and copy the inner HTML. Paste that into your favorite text editor and save; people using Windows should use Notepad++. Do the same for the following list. I saved followers into a file called fm and following into ft; if you like a different naming scheme, please rename the files in the code accordingly.
In the directory where fm and ft are saved, create a Perl script with the following content:


When you run it - from a terminal, the command line, or an IDE - it should produce output like this:

Queen_Europe - 407347022
FaulkesSouth - 102351359
lcogt - 80797776
glenn_hughes - 15565190
FaulkesNorth - 102350867
Rogozin - 36980694
wikileaks - 16589206
Nigel_Farage - 19017675
StoppINDECT - 169043724

Those are screen name and user ID pairs.