Monday, February 25, 2013

C++ and pthreads on Linux

If you want to use the NDK, that is JNI, on Android, sooner or later you will run into threading on Linux; yes, Android is Linux. Pthreads is short for POSIX threads, where POSIX stands for Portable Operating System Interface. Since they are portable, they work on Unix, Linux, FreeBSD, Mac OS and so on. Pthreads are defined in pthread.h as a set of C functions, structs and definitions; to discover where that header lives, execute from a terminal:

$ locate pthread.h

If you have the NDK installed you will see that pthread.h is part of it, and that the arch-arm version is significantly smaller than the /usr/include/pthread.h one. To cut a long story short, we can do a Hello World and see how it looks:


It is quite simple: we declare the thread function and the thread itself, then we create the thread, passing the function pointer and an argument to pthread_create. Later we join and exit. To compile it we do:

g++ -pthread simple.cpp -o simple

That -pthread switch tells the compiler and linker to enable POSIX threads. Now, cout is not thread safe, so we may want to use a mutex to make printing to standard output safe:


Now one can go and wrap pthread or mutex in a class to make it easier to use. We only need to be careful with that function pointer in pthread_create: it expects a C function, so passing a class method won't work without some additional coding and static casting.
Finally, we will take a look at pthread_key_create, which is recommended by the Android documentation. Its signature is:

int pthread_key_create(pthread_key_t *key, void (*destructor)(void *));

That "destructor" should free the resources held under the key when a thread exits. The key itself is a global variable, but its value is per thread, sort of locally global. So here is an example:


When you build and execute it you will see that cout is not thread safe. If you want to wrap pthread in a class, very good reading is "Implementation and Usage of a Thread Pool based on POSIX Threads" by Ronald Kriemann; sorry, I do not have a link for it. Besides mutexes, there are condition variables and semaphores to help with concurrency issues. If you are going to fork a process you may want to use System V semaphores instead of pthread semaphores. Finally, for any serious use of pthread_key_create you need to look at pthread_once_t.

Thursday, February 21, 2013

More C++ and JNI on Linux Mint

In the last JNI tutorial we learned how to compile Java code, generate a header, and implement and compile native code. JNI declares two main groups of commonly used types. Primitive types, such as int, boolean, float and so on, are user friendly; there is no need to cast them to native types. The other group are reference types, like String, arrays or proper Java classes; they all inherit from jobject and require special handling to map them to native types. So here is a simple example to illustrate handling of the different parameter types:

public class MultiCall{
    public native int add(int a, int b);
    public native boolean isOdd(long a);
    public native int sum(int[] arr);
    public native String add(String a, String b);
    static
    {
        System.loadLibrary("MultiCall");
    }
    public static void main(String[] args)
    {
        MultiCall m = new MultiCall();
        System.out.println(m.add(1, 2));
        System.out.println(m.add("Hello", "World"));
        System.out.println(m.sum(new int[]{1,2,3,4}));
        System.out.println(m.isOdd(6L));
    }
}


After we compile it and run the javah tool on the resulting class, we have the following method definitions:
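The screenshot of the generated header is missing; javah output for this class would look roughly like the following. Note how the overloaded add methods get the argument signature mangled into the native name:

```cpp
/* DO NOT EDIT THIS FILE - it is machine generated */
JNIEXPORT jint JNICALL Java_MultiCall_add__II
  (JNIEnv *, jobject, jint, jint);
JNIEXPORT jboolean JNICALL Java_MultiCall_isOdd
  (JNIEnv *, jobject, jlong);
JNIEXPORT jint JNICALL Java_MultiCall_sum
  (JNIEnv *, jobject, jintArray);
JNIEXPORT jstring JNICALL Java_MultiCall_add__Ljava_lang_String_2Ljava_lang_String_2
  (JNIEnv *, jobject, jstring, jstring);
```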


The implementation looks like this:
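The implementation listing is gone; here is a plausible reconstruction. It uses std::string for the concatenation as the text describes; the separator in the string version of add is my guess.

```cpp
#include <jni.h>
#include <string>
#include "MultiCall.h"

JNIEXPORT jint JNICALL Java_MultiCall_add__II(JNIEnv *env, jobject obj, jint a, jint b)
{
    return a + b;  // primitives map directly, no conversion needed
}

JNIEXPORT jboolean JNICALL Java_MultiCall_isOdd(JNIEnv *env, jobject obj, jlong a)
{
    return (a % 2) != 0;
}

JNIEXPORT jint JNICALL Java_MultiCall_sum(JNIEnv *env, jobject obj, jintArray arr)
{
    jint result = 0;
    jsize len = env->GetArrayLength(arr);
    jint *elems = env->GetIntArrayElements(arr, NULL);
    for (jsize i = 0; i < len; ++i)
        result += elems[i];
    // JNI_ABORT: we made no changes, nothing to copy back
    env->ReleaseIntArrayElements(arr, elems, JNI_ABORT);
    return result;
}

JNIEXPORT jstring JNICALL Java_MultiCall_add__Ljava_lang_String_2Ljava_lang_String_2(
        JNIEnv *env, jobject obj, jstring a, jstring b)
{
    const char *ca = env->GetStringUTFChars(a, NULL);
    const char *cb = env->GetStringUTFChars(b, NULL);
    std::string result(ca);   // std::string makes a deep copy
    result += " ";
    result += cb;
    env->ReleaseStringUTFChars(a, ca);
    env->ReleaseStringUTFChars(b, cb);
    return env->NewStringUTF(result.c_str());
}
```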


Reference types require significant work to use from native code. Also, if we want to return a reference type, it needs to be constructed. I am using std::string because it makes a deep copy of the character array, at least with gcc/g++, and lets me manipulate strings easily. The code is easy to understand and doesn't need further comments.
The second example illustrates how to throw an exception from native code and catch it in Java code.

public class NativeException{
    public native int divide(int a, int b) throws IllegalArgumentException;
    static
    {
        System.loadLibrary("NativeException");
    }
    public static void main(String[] args)
    {
        NativeException n = new NativeException();
        try{
            System.out.println(n.divide(12, 3));
            n.divide(12, 0);
        }catch(Exception e){
            System.out.println(e.getMessage());
        }
    }
}


Even though we marked the native method with throws, javac will not complain if we call it outside of a try-catch block; one of the reasons why pure Java is preferred to JNI. Also, in the header file, that throws is simply ignored:
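The header screenshot is missing; the generated prototype carries no trace of the throws clause, something like:

```cpp
JNIEXPORT jint JNICALL Java_NativeException_divide
  (JNIEnv *, jobject, jint, jint);
```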


In the implementation we do a sanity check, and if the second parameter is zero we try to throw IllegalArgumentException:
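The implementation listing is gone; a sketch of the usual pattern with FindClass and ThrowNew (the message text is my own):

```cpp
#include <jni.h>
#include "NativeException.h"

JNIEXPORT jint JNICALL Java_NativeException_divide(JNIEnv *env, jobject obj, jint a, jint b)
{
    if (b == 0) {
        jclass exClass = env->FindClass("java/lang/IllegalArgumentException");
        if (exClass != NULL)
            env->ThrowNew(exClass, "Division by zero");
        return 0;  // the value is ignored, an exception is pending in the VM
    }
    return a / b;
}
```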


If env fails to find IllegalArgumentException, we just return zero.

Tuesday, February 19, 2013

Simple Moon processing workflow with Open Source tools

About 50 pictures were taken through a small 4.5 inch Newtonian using a T-adapter. Since the back-focus on this small scope is too short, a 2x Barlow is used, which raises the focal ratio from f/8 to f/16. Out of those 50 there are one or two where the turbulence is not bad. After transferring the images from camera to hard drive, Rawstudio is used to preview them and find the better ones. For conversion from RAW to TIFF I am using Darktable. Here you can see what processing is done in Darktable and which modules are active:



The exported TIFF is then imported into GIMP and GMIC is activated from the Filters menu. We will stay in GMIC until the end, except when copying layers. From the Colors group, Tone mapping is performed using default parameters, and we will work further with the result of that transformation.



While it looks better, I am not happy with the amount of detail, so from the same group Local normalization is performed with default parameters and the output mode set to new image.



Again starting from the tone-mapped image, Local contrast enhancement from the Enhancement group is applied using default parameters.



At this moment one could decide to use those three images to create a pseudo HDR using exposure blend, but I didn't like the flat border area in the output, caused by local contrast enhancement. Instead, I copy-paste the local contrast enhanced image over the locally normalized one and in GMIC, using input layers all, execute Average blending from the Layers group (the first one marked [standard]). Over the result of the averaging I copy-paste the tone-mapped image we started with and average again. This is the final result:


Saturday, February 16, 2013

Maximal exposure from tripod without star trails

For a while I was trying to find the origin of the famous 600 rule, without success. Here is what I learned so far. It is a rule applicable to pictures taken from a tripod, without tracking. The rule says that the maximal exposure time, in seconds, for a star at declination 0 degrees equals 600 divided by the focal length. We are talking about 35mm film here, so for a DSLR camera we need to multiply the focal length by the crop factor, 1.5 for Nikon and 1.6 for Canon. Declination 0 degrees is the celestial equator. Further, the rule assumes a trail of 0.1mm viewed at 254mm, reading distance, so I assume the image is printed on A4 paper. Since 35mm film is history, that 0.1mm somehow maps to 8 pixels on a DSLR camera. Whether 8 pixels is acceptable or not is another issue. There is even a formula:

Maximal exposure = sidereal day * acceptable trail * pixel size / (focal length * 2 * π * cos(declination))

The sidereal day is 23 h 56 m 4.09 s, the acceptable trail is 8 pixels, the pixel size for a Canon EOS 600D/T3i is 0.00429 mm, the focal length is 80 mm (50 mm * 1.6), and cos(0) is 1. All values must be expressed in the same units; converting the sidereal day into seconds gives 86164.09 s. Substituting the values into the equation gives 5.883066121 s. Some sensor data is available on http://www.sensorgen.info/; the crop factor for Nikon and Sony is 1.5. What an acceptable trail is for you I do not know; feel free to use your own value instead of the suggested 8 pixels. To find out the declination of a star, install Stellarium; on the southern hemisphere ignore the negative sign of the declination and use the absolute value.
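To double-check the arithmetic, the formula can be typed in directly; this little helper is my own illustration, not part of the original post:

```cpp
#include <cmath>

// Maximal exposure in seconds before trailing becomes visible.
// Lengths in mm, declination in degrees.
double maxExposure(double trailPixels, double pixelSizeMm,
                   double focalLengthMm, double declinationDeg)
{
    const double siderealDay = 86164.09;            // seconds
    double decRad = declinationDeg * M_PI / 180.0;  // degrees to radians
    return siderealDay * trailPixels * pixelSizeMm
           / (focalLengthMm * 2.0 * M_PI * cos(decRad));
}
```

With the values above, maxExposure(8, 0.00429, 80, 0) comes out to about 5.88 s, matching the hand calculation.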
Trigonometric functions are available in any calculator on any OS; just switch into scientific mode. How does that cosine influence the result? For Orion we can neglect it, but if we want to take a picture of the Southern Cross, at about 60 degrees declination, our maximal exposure doubles. For a declination of 90 degrees we have a singular point where the exposure grows to infinity: if there were a star at the celestial pole we could take a whole night exposure, it is not moving. The closer to the celestial pole you are, the longer the acceptable exposure.

Subclassing Spinner and ArrayAdapter to change drop-down appearance

As we know, applying a different layout can change text color and size, background color and so on. There is no need to write any Java code to achieve that; we just use existing Android classes. But if we want to change the drop-down item to include more controls than just a TextView and one drawable, or to apply custom resizing, then we have to dive into the ArrayAdapter source and write a custom adapter. I will not bother you with the image-plus-two-TextViews example, there are plenty of tutorials on the Web about that; instead I will just resize the drop-down, to keep it simple. So here is the complete code for a minimalistic custom adapter:

public class CustomAdapter extends ArrayAdapter<String> implements SpinnerAdapter{
    private LayoutInflater mInflater;
    private int mFieldId = 0;
    private int mResource;
    public CustomAdapter(Context context, int textViewResourceId,
            String[] objects) {
        super(context, textViewResourceId, objects);
        mInflater = (LayoutInflater)context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
    }
    @Override
    public View getDropDownView(int position, View convertView, ViewGroup parent) {
        return createViewFromResource(position, convertView, parent, mResource);
    }
    @Override
    public void setDropDownViewResource(int resource) {
        this.mResource = resource;
    }
    private View createViewFromResource(int position, View convertView, ViewGroup parent,
            int resource) {
        View view;
        TextView text;
        if (convertView == null) {
            view = mInflater.inflate(resource, parent, false);
        } else {
            view = convertView;
        }
        try {
            if (mFieldId  == 0) {
                //  If no custom field is assigned, assume the whole resource is a TextView
                text = (TextView) view;
            } else {
                //  Otherwise, find the TextView field within the layout
                text = (TextView) view.findViewById(mFieldId);
            }
        } catch (ClassCastException e) {
            Log.e("ArrayAdapter", "You must supply a resource ID for a TextView");
            throw new IllegalStateException(
                    "ArrayAdapter requires the resource ID to be a TextView", e);
        }
        String item = getItem(position);
        if (item instanceof CharSequence) {
            text.setText((CharSequence)item);
        } else {
            text.setText(item.toString());
        }
        view.setLayoutParams(new AbsListView.LayoutParams(LayoutParams.FILL_PARENT, LayoutParams.WRAP_CONTENT));
        return view;
    }
}


Most of createViewFromResource is Android code from ArrayAdapter; my modest contribution is only the setLayoutParams line at the end. If I specify some fixed value for width there, instead of LayoutParams.FILL_PARENT, then I achieve this:


Well, the items are narrower but the container is still the same, so subclassing ArrayAdapter doesn't really help. What needs to be resized and repositioned is the AlertDialog which holds those rows; we can find that out by opening the Spinner source. Now I will again do minimalistic subclassing, this time of Spinner.

public class CustomSpinner extends Spinner {
    private AlertDialog mPopup;
    public CustomSpinner(Context context) {
        super(context);
    }
    public CustomSpinner(Context context, AttributeSet attrs) {
        super(context, attrs);
    }
    @Override
    protected void onDetachedFromWindow() {
        super.onDetachedFromWindow();
        if (mPopup != null && mPopup.isShowing()) {
            mPopup.dismiss();
            mPopup = null;
        }
    }
    @Override
    public boolean performClick() {
        Context context = getContext();

        AlertDialog.Builder builder = new AlertDialog.Builder(context);
        CharSequence prompt = getPrompt();
        if (prompt != null) {
            builder.setTitle(prompt);
        }
        mPopup = builder.setSingleChoiceItems(
                new DropDownAdapter(getAdapter()), getSelectedItemPosition(),
                this).show();
        WindowManager.LayoutParams layout = mPopup.getWindow().getAttributes();
        layout.x = -128;
        layout.y = -110;
        layout.height = 320;
        layout.width = 240;
        mPopup.getWindow().setAttributes(layout);
        return true;
    }
    @Override
    public void onClick(DialogInterface dialog, int which) {
        setSelection(which);
        dialog.dismiss();
        mPopup = null;
    }
    /*
     * here you copy and paste code for DropDownAdapter from Spinner
     */
}


The hardcoded values are good enough to illustrate the subclassing. Since we are now using a custom control, we need to change the tag in the layout:
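The layout snippet is missing; assuming the custom class lives in a package like com.example.customspinner (the package name here is hypothetical), the tag would be something like:

```xml
<com.example.customspinner.CustomSpinner
    android:id="@+id/spinner1"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content" />
```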


Now if we build and run application in emulator we will get this:



Not very complicated.

Thursday, February 14, 2013

JNI Hello World on Linux Mint

I wrote a similar tutorial for www.linux.com about four years ago. The motivation is the same as then: in the Sun, today Oracle, documentation only Solaris and Windows are covered, Linux is omitted. There are also slight differences in the compilation process between 32 bit Ubuntu 9.04 then and 64 bit Linux Mint 13 today. Meanwhile, JNI has grown in popularity thanks to the Android NDK; to do native Android development one should be familiar with C, C++, JNI and Java. As in that old article I will use C++ to create the shared library.
We start with Java class:

class YAHelloWorld
{
    public native void sayHi(String name);
    static
    {
        System.loadLibrary("YAHelloWorld");
    }
    public static void main(String[] args)
    {
        YAHelloWorld h = new YAHelloWorld();
        h.sayHi("World");
    }
}


Besides normal Java stuff, we have a request to load the native library and a declaration of the native method. We save it as YAHelloWorld.java and compile it:

javac YAHelloWorld.java

After that we invoke javah to create native header:

/usr/lib/jvm/java-7-oracle/bin/javah -jni YAHelloWorld

I was too lazy to export the path, hence the full path to javah. If you try without the full path you may get a suggestion to install additional packages:


which is not really necessary. The content of the generated YAHelloWorld.h is:
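The header screenshot is missing; javah output for this class looks like the following:

```cpp
/* DO NOT EDIT THIS FILE - it is machine generated */
#include <jni.h>
/* Header for class YAHelloWorld */

#ifndef _Included_YAHelloWorld
#define _Included_YAHelloWorld
#ifdef __cplusplus
extern "C" {
#endif
/*
 * Class:     YAHelloWorld
 * Method:    sayHi
 * Signature: (Ljava/lang/String;)V
 */
JNIEXPORT void JNICALL Java_YAHelloWorld_sayHi
  (JNIEnv *, jobject, jstring);
#ifdef __cplusplus
}
#endif
#endif
```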


The first parameter, JNIEnv *, is a pointer to a table of JNI function pointers, jobject is a sort of this pointer, and jstring is a Java String. Now that we know all that, we can write the C++ implementation:
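The implementation listing is gone; a reconstruction of the sort of code the post describes, converting the Java String at the start and releasing the VM string at the end:

```cpp
#include <jni.h>
#include <string>
#include <iostream>
#include "YAHelloWorld.h"

JNIEXPORT void JNICALL Java_YAHelloWorld_sayHi(JNIEnv *env, jobject obj, jstring name)
{
    // convert the Java String to a C string owned by the VM
    const char *cname = env->GetStringUTFChars(name, NULL);
    if (cname == NULL)
        return;  // OutOfMemoryError has already been thrown

    std::string who(cname);  // deep copy into a C++ string
    std::cout << "Hello " << who << "!" << std::endl;

    env->ReleaseStringUTFChars(name, cname);  // release the VM string
}
```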


Besides normal C++ stuff, we convert the Java String to a C++ string at the beginning and release the VM-owned string at the end. Compilation looks like this:

g++ -shared -fPIC -I/usr/lib/jvm/java-7-oracle/include -I/usr/lib/jvm/java-7-oracle/include/linux YAHelloWorld.cpp -o libYAHelloWorld.so

and finally we execute our Hello World example like this:

java -Djava.library.path=. YAHelloWorld

That should produce familiar Hello World output. Not too difficult.

Customizing Spinner appearance

If you want to change the appearance of a Button or TextView, it is very easy. Open the layout subfolder in the res folder, then open activity_main.xml and add the following lines:

android:gravity="center"
android:background="#000000"
android:textColor="#ff0000"
android:textSize="32sp"


Something like that; Button text is centered anyway and the TextView background is see-through by default, so not all of these may be required. But if you want to change the appearance of a Spinner, things become complicated. Naturally the first solution is to do a Google search and see how others are doing it. That gets you to Stack Overflow, where there are plenty of solutions based on the contributor's enthusiasm to say something. That eloquent guessing, without any understanding of what is actually going on, goes so far that some claim you must subclass ArrayAdapter to change the text color. To be honest, the Android documentation is not very helpful here, and according to the modern scientific approach people go agile: they do trial and error, in plain English. If you do not have knowledge, demonstrate effort. Naturally, there is a much better way. Android is an open source project and getting its Java code is not difficult; in the Android SDK Manager one just needs to select Sources for Android SDK and download them. Now if we take a look at typical Spinner usage we see something like this:

protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    Spinner spinner=(Spinner)findViewById(R.id.spinner1);
    spinner.setOnItemSelectedListener(this);
    adapter=new ArrayAdapter<String>(this, android.R.layout.simple_spinner_item, items);
    adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
    spinner.setAdapter(adapter);
}


The appearance of the Spinner is defined by android.R.layout.simple_spinner_item in the collapsed state and android.R.layout.simple_spinner_dropdown_item in the expanded state. If we want the Spinner to look different, we need to change the layout. Unfortunately android.R.layout.simple_spinner_item is not part of Sources for Android SDK, but android.widget.ArrayAdapter is available. We open it and see the relevant methods:

public View getView(int position, View convertView, ViewGroup parent) {
    return createViewFromResource(position, convertView, parent, mResource);
}
public View getDropDownView(int position, View convertView, ViewGroup parent) {
    return createViewFromResource(position, convertView, parent, mDropDownResource);
}


Then we look at createViewFromResource, where the real work is done:

private View createViewFromResource(int position, View convertView, ViewGroup parent,
        int resource) {
    View view;
    TextView text;

    if (convertView == null) {
        view = mInflater.inflate(resource, parent, false);
    } else {
        view = convertView;
    }
    try {
        if (mFieldId == 0) {
            //  If no custom field is assigned, assume the whole resource is a TextView
            text = (TextView) view;
        } else {
            //  Otherwise, find the TextView field within the layout
            text = (TextView) view.findViewById(mFieldId);
        }
    } catch (ClassCastException e) {
        Log.e("ArrayAdapter", "You must supply a resource ID for a TextView");
        throw new IllegalStateException(
                "ArrayAdapter requires the resource ID to be a TextView", e);
    }
    T item = getItem(position);
    if (item instanceof CharSequence) {
        text.setText((CharSequence)item);
    } else {
        text.setText(item.toString());
    }
    return view;
}

And there we find the required information, in the try block above: we need to supply a layout which either is a TextView itself or contains a TextView, and in the second case we must also supply the ID of that TextView. Now that we have an idea of what we are doing, changing the appearance is trivial.
Collapsed appearance layout:
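The layout screenshot is gone; based on the attributes listed at the top of this post, col.xml would be something along these lines (the exact values are illustrative):

```xml
<?xml version="1.0" encoding="utf-8"?>
<TextView xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:gravity="center"
    android:background="#000000"
    android:textColor="#ff0000"
    android:textSize="32sp" />
```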


Expanded appearance layout:
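The expanded layout screenshot is also gone; exp.xml is again a single TextView, styled however you want the rows to look, for example:

```xml
<?xml version="1.0" encoding="utf-8"?>
<TextView xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:padding="10dp"
    android:textColor="#ff0000"
    android:textSize="24sp" />
```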


We save those two as custom Spinner layouts. We know that the ID can be omitted, and we happily omit it; less typing. We use our custom layouts like this:

protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    Spinner spinner=(Spinner)findViewById(R.id.spinner1);
    spinner.setOnItemSelectedListener(this);
    adapter=new ArrayAdapter<String>(this, R.layout.col, items);
    adapter.setDropDownViewResource(R.layout.exp);
    spinner.setAdapter(adapter);
}


Obviously I saved them as col.xml and exp.xml.

Wednesday, February 13, 2013

Talking Exposure Timer

This is a simple talking count-down timer for Android. As the title says, its purpose is to help with taking pictures where a long exposure is required and one can't look at a watch, for example in astrophotography. That is also the main reason why the user interface is black and red. So, according to my intentions, the primary users should be people doing astrophotography with manual tracking or barn door tracking.
The user interface is simple: frequency of announcements and duration spinners at the top, start and cancel buttons in the middle, and info text at the bottom. If the text to speech engine fails to initialize, the start button is disabled and appropriate info is displayed. If the text to speech engine initializes, it uses the phone's default language. Cancel is not immediate but is applied on the first delayed post, during the next announcement of remaining time.
To use it, select how frequent the announcements should be and how long the exposure should last; after a delay of 10 seconds, common for DSLR cameras in remote shutter BULB mode, it starts talking and counting down. While the countdown lasts, the application holds a wake lock: the screen will go dim but will not go to "sleep". It is compiled for and tested on an Android 2.2 device, the LG-P500.
The signed APK is available for download from https://docs.google.com/file/d/0B0cIChfVrJ7WbkhISE5ZdGdSdmM/edit?usp=sharing; download it, copy it to the SD card and open it in a file manager to install.
The Eclipse project is here https://docs.google.com/file/d/0B0cIChfVrJ7WaS1oRl9lTUJUNmc/edit?usp=sharing; download it, import it in Eclipse and build.


Tuesday, February 12, 2013

Spinner and ArrayAdapter

This is the simplest possible tutorial, but I need a Spinner to allow the user to set the time interval and frequency for a countdown timer. So let's explain it; maybe somebody doesn't know how to use it.
Spinner is a control corresponding to a drop-down, like HTML select or Java/Swing JComboBox. When the user selects it, it expands and pops up a list view where a selection can be made. After the selection is made the list view disappears and the selected item's value is shown on the collapsed control. Data binding and selection events are handled by ArrayAdapter and AdapterView. The Android SDK has an example of loading an ArrayAdapter from a resource; I will load it from an array of strings instead. Here is the complete code:

public class MainActivity extends Activity implements AdapterView.OnItemSelectedListener {
    private static final String[] items={"5", "10", "15", "20", "30"};
    private ArrayAdapter<String> adapter;
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        Spinner spinner=(Spinner)findViewById(R.id.spinner1);
        spinner.setOnItemSelectedListener(this);
        adapter=new ArrayAdapter<String>(this, android.R.layout.simple_spinner_item, items);
        adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
        spinner.setAdapter(adapter);
        spinner.setSelection(0);
    }
    @Override
    public void onItemSelected(AdapterView<?> parent, View view, int position, long id) {
        Toast.makeText(this, adapter.getItem(position) + " is selected", Toast.LENGTH_LONG).show();
    }
    @Override
    public void onNothingSelected(AdapterView<?> parent) {
        Toast.makeText(this, "Nothing selected", Toast.LENGTH_SHORT).show();
    }
}


Add AdapterView, ArrayAdapter, Spinner and Toast to the default imports. In the layout, activity_main.xml, I placed one Spinner and accepted the default name for it. AdapterView.OnItemSelectedListener is used to implement the listeners inside the activity, again differently from the SDK example. The difference between android.R.layout.simple_spinner_dropdown_item and android.R.layout.simple_spinner_item is that the first one gives you a nicer user interface, with a radio button. We must pass an array of strings to the ArrayAdapter constructor; an array of integers won't work. Before the UI is shown we set the selected item to position 0. Selecting different items will demonstrate how the listener works.

Sunday, February 10, 2013

Timer? No, it is Handler.postDelayed

In a previous post http://grumpyoldprogrammer.blogspot.com/2013/02/text-to-speech.html I managed to initialize TTS. Now, with a talkative Android, I can go and implement the countdown timer. The whole story with SoundPool and TTS was about needing a tool to count the exposure for me while I am doing manual tracking. Only I won't use Timer or CountDownTimer but an ordinary Handler. I leave the whole TTS code as it is, change the button onClick listener and add a couple of variables. These are the new class variables:

private int counter = 60;
private Handler timer;
boolean timerRunning = false;
private Runnable timerTask = new Runnable() {
    public void run() {
        myTTS.speak(Integer.toString(counter), TextToSpeech.QUEUE_FLUSH, null);
        if (counter > 1) {
            counter-=5;
            timer.postDelayed(timerTask, 5000);
        } else {
            counter = 60;
            timerRunning = false;
        }
    }
};


For a more elaborate application I would make the counter and delay adjustable, but for now I like it simple. Don't forget to create an instance of Handler. Now the new onClick method:

public void onClick(View v) {
    if (!timerRunning) {
        timerRunning = true;
        timer.post(timerTask);
    }
}


If there is no active cycle, onClick starts a new one. Runnable.run asks TTS to read the counter, decreases it by 5, matching the 5000 ms in postDelayed, and finally reposts itself using the Handler. When the counter reaches 0 we reset it back to 60 and clear the flag so that the onClick listener can be used again. On the emulator, a 60 second countdown takes about 60.3 seconds, so it is not very precise, but also not unusable.

Saturday, February 9, 2013

Text-To-Speech

This one is also known as TTS and has been present on the Android platform since version 1.6. Usage is uncomplicated, but initialization is not. The whole initialization issue is about whether we can have the desired language: is it supported on the particular phone? To request a language we need to implement the OnInitListener interface, which is defined in android.speech.tts.TextToSpeech.OnInitListener; we also need android.speech.tts.TextToSpeech itself, so we import both. Before setting any language we want to check what is available, and that goes in the form of question and answer. Inside onCreate we use an Intent:

Intent checkTTSIntent = new Intent();
checkTTSIntent.setAction(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
startActivityForResult(checkTTSIntent, MY_DATA_CHECK_CODE);


Naturally to hear reply we will implement onActivityResult:

protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == MY_DATA_CHECK_CODE) {
        if (resultCode == TextToSpeech.Engine.CHECK_VOICE_DATA_PASS) {
            myTTS = new TextToSpeech(this, this);
        } else {
            Intent installTTSIntent = new Intent();
            installTTSIntent.setAction(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
            startActivity(installTTSIntent);
        }
    }
}


Here myTTS is a class level variable. If we have TTS data we create the TTS instance; otherwise we start the installation process, which takes the user online. The previous two steps are the Intent story, the simplest form of IPC on the Android platform; everybody should know that. Since we passed this as the OnInitListener parameter to the TTS constructor, we should now implement that listener and set the language.

public void onInit(int status) {
    if (status == TextToSpeech.SUCCESS) {
        if(myTTS.isLanguageAvailable(Locale.ITALY)==TextToSpeech.LANG_COUNTRY_AVAILABLE)
            myTTS.setLanguage(Locale.ITALY);
        else
            myTTS.setLanguage(Locale.UK);
    } else {
        Toast.makeText(this, "TTS failed!", Toast.LENGTH_LONG).show();
    }
}


If init succeeded, we check whether we can have Italian; if Italian is not available we settle for proper English. Now that initialization is done, we can pass strings containing the desired speech to TTS and listen:

String story1 = "are you done yet";
String story2 = "when it will be ready";
myTTS.speak(story1, TextToSpeech.QUEUE_FLUSH, null);
myTTS.speak(story2, TextToSpeech.QUEUE_ADD, null);


Obviously we are mimicking a project manager. If Italian succeeded, we should use “stai ancora finito” and “quando sarà pronto”, according to Google Translate, but that is not important right now.

Why to use RAW?

Quite frequently that question is repeated in different forms on Google+ Open Source Photography community https://plus.google.com/u/0/communities/110647644928874455108.
For me, the argument that RAW encodes data at 14 bits per channel, versus 8 bits per channel for JPEG, is more than sufficient. But for a non-technical person that is not very convincing. If we add the fact that the camera's internal processor does a great job in 90% of situations when it creates the JPEG, and that you need quite a high level of processing skill to achieve the same result starting from RAW, the rock solid RAW argument doesn't look so good. People advocating RAW start to sound like Groucho Marx: “Who are you going to believe, me or your lying eyes?”.
So, here is a simple example of why RAW is good. Camera handling of great differences in luminosity is nowhere near the human eye. With automatic exposure on, either the dark parts are too dark or the bright parts are too bright. The offending photo is stored as RAW and I open it in Darktable; it should open by default in darkroom mode. After doing sharpening and lens correction in the correction group (the one with the broken circle symbol), I switch to the basic group. Here is what we got:



After activating the overexposed plugin, all overexposed parts become red, like here:



To remedy that I switch on the exposure plugin and push the slider to an unreasonably low -2.61 EV. Now everything that is underexposed is blue.



After adjusting the exposure to a reasonable -1.42 EV we still have some underexposed areas, but they are in shade and we can safely ignore them.



Now we can export it and do further processing in GIMP, or we can even try exporting different exposure levels and later do exposure blending.

Wednesday, February 6, 2013

Simple SoundPool example

I needed it for a timer which plays a beep every 15 to 30 seconds. Since I am using a remote shutter release and doing manual tracking, astrophotography, I can't look away to check how long the exposure has been and then go back to tracking if it is not long enough.
SoundPool is intended to load audio from resources in the APK; the MediaPlayer service immediately decodes the audio to raw 16 bit PCM which stays loaded in memory. Multiple audio streams can be played simultaneously; they can't be started at exactly the same time, but they can overlap. Anyway, take a look at the documentation http://developer.android.com/reference/android/media/SoundPool.html
Researching on the Web, I concluded that Ogg Vorbis is the preferred file format for the audio files and that they should not be longer than a few seconds. To convert audio files to Ogg Vorbis I am using Audacity http://audacity.sourceforge.net/, which works on all major operating systems.
So here is the code for very simple player:

protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    player = new SoundPool(2, AudioManager.STREAM_MUSIC, 100);
    soundList = new SparseIntArray();
    soundList.put(1, player.load(this, R.raw.biska, 1));
    Button button1 = (Button)findViewById(R.id.button1);
    button1.setOnClickListener(button1OnClickListener);
}


We have a SoundPool for at most two streams, the stream type is music, and the sample-rate converter quality is set to 100 (according to the documentation this parameter currently has no effect). Next we associate a known key with the result of loading the audio from the resource.
To play it we want to set volume so it matches current media volume on target phone:

Button.OnClickListener button1OnClickListener
= new Button.OnClickListener(){
    @Override
    public void onClick(View v) {
        AudioManager audioManager = (AudioManager)getSystemService(Context.AUDIO_SERVICE);
        // stream volumes are ints; assigning to float avoids integer division below
        float curVolume = audioManager.getStreamVolume(AudioManager.STREAM_MUSIC);
        float maxVolume = audioManager.getStreamMaxVolume(AudioManager.STREAM_MUSIC);
        float volume = curVolume/maxVolume;
        // left volume, right volume, priority 1, no loop, normal playback rate
        player.play(soundList.get(1), volume, volume, 1, 0, 1f);
    }
};
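A small detail worth calling out: getStreamVolume and getStreamMaxVolume return int, so the intermediate float variables are what prevent integer division from truncating the ratio to zero. A plain-Java sketch of the pitfall (the volume steps are made-up but typical values):

```java
public class VolumeRatio {
    public static void main(String[] args) {
        int cur = 7;    // e.g. current media volume step
        int max = 15;   // e.g. maximum media volume step
        float wrong = cur / max;          // int division runs first: 7/15 == 0
        float right = (float) cur / max;  // promote to float before dividing
        System.out.println(wrong);        // 0.0
        System.out.println(right);        // ~0.4666667
    }
}
```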


Proper cleanup consists of calling the release() method on the SoundPool instance and setting the reference to null.
Here is the link to the complete example https://docs.google.com/file/d/0B0cIChfVrJ7WRW10SWFmd2ZZUzg/edit?usp=sharing; the included Ogg Vorbis clip is about 5 seconds long. If you click the play button twice you can “enjoy” two streams playing simultaneously.

Friday, February 1, 2013

Bits and pieces

This is a collection of things I should have said in previous posts but forgot. If you are aligning a stack using Hugin tools, note that align_image_stack is an HDR tool and it can't handle a significant number of images in a stack, or significant movement between images, very well. The workaround here is the venerable divide-and-conquer strategy: divide the images into groups of a few, stack them group by group, and at the end stack the intermediate results. If your exposures are uneven, stack the shorter ones first and later add them to the longer ones. Specifying the stacking order explicitly instead of passing an asterisk wildcard may also help.
After every contrast-stretching operation keep the noise level under control. The best tool around is wavelet denoise, which is part of gimp-plugin-registry. Don't overdo denoising or you will lose detail and sharpness. A reasonable level is up to 1.25 with residual 0.10.
If you are into photography then adding following repositories may be interesting:

sudo add-apt-repository ppa:otto-kesselgulasch/gimp
sudo add-apt-repository ppa:pmjdebruijn/darktable-unstable
sudo add-apt-repository ppa:philip5/extra
sudo add-apt-repository ppa:hugin/hugin-builds


They usually contain the name of the package, except for Philip Johnsson's PPA. That one is suggested because it contains Luminance HDR, but it also contains about everything else, though the other PPAs have newer versions. When you want to install Luminance HDR, specify luminance-hdr and not the common qtpfsgui; with qtpfsgui you will get the old version from the official repo.
For entangle, a program for tethered camera control, you download the deb from:

http://mirrors.dotsrc.org/getdeb/ubuntu/pool/apps/e/entangle/


that is a GetDeb mirror. Pick 32 or 64 bit depending on what system you are running. There are no exotic dependencies for entangle.

In order to increase capacity and avoid loss of data, many picture processing tools internally use 32-bit float per channel. That is perfectly alright, though I do not know of any CCD that digitizes a picture into anything better than 16-bit integer per channel. The problem is that you can't easily view those 32-bit-float-per-channel TIFF images; there are not many viewers around for them. The popular image stacking program Deep Sky Stacker produces Autosave.tif in that format and you can't see what it looks like, which is very irritating. A perfectly capable viewer for many image formats is G'MIC, and it can handle 32-bit float per channel TIFF. Once you install it, open a terminal (command line) and cd to the folder with Autosave.tif. Execute:

gmic Autosave.tif

and a GUI with Autosave.tif will show up; moving the cursor over the image you will be able to see the values for the current pixel. Now if you want to convert those TIFF images for less capable viewers, you can also achieve that using G'MIC. For Deep Sky Stacker, channel values will be between 0 and 1. So if we want to convert that into 16-bit integer we simply multiply the channel value by 2 to the power of 16 minus 1, which is 65535. Naturally we are talking about unsigned integers. So the magic formula to convert it to 16-bit TIFF is

gmic Autosave.tif -mul 65535 -c 0,65535 -type ushort -output auto16.tiff

Now we can use almost any viewer to see what it looks like.
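The per-channel arithmetic behind that gmic call can be sketched in a few lines of plain Java (this is not a TIFF writer, just the scaling math; the class and method names are mine):

```java
public class Channel16 {
    // Scale a normalized [0,1] float channel into the 16-bit unsigned range,
    // mirroring "-mul 65535 -c 0,65535 -type ushort".
    static int toUShort(double v) {
        double clamped = Math.max(0.0, Math.min(1.0, v));  // like the -c (cut) step
        return (int) Math.round(clamped * 65535.0);        // like the -mul 65535 step
    }

    public static void main(String[] args) {
        System.out.println(toUShort(0.0));  // 0
        System.out.println(toUShort(0.5));  // 32768
        System.out.println(toUShort(1.0));  // 65535
    }
}
```

Clamping before scaling gives the same result here as gmic's multiply-then-cut order, since out-of-range values end up pinned to 0 or 65535 either way.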

Linux Mint and Adobe Shockwave Flash problem

After my GIMP 2.9.1 adventure Mate eventually gave up and I reinstalled Mint. The GTK incompatibility was too much for Mate to handle.
Everything was as usual, except that embedded YouTube videos played in fast-forward mode without producing any sound. I tried alternatives to mint-flashplugin-11 but nothing changed, so I went on installing more important things and left the Flash problem for later. Then after a few days I started watching some video and noticed that the audio was very weak. I went to the audio properties, main menu -> Control Center -> Hardware -> Sound, and changed the settings like this:



From HDMI Audio to Built-in Audio, and Output volume to 100%. Flash started working properly?! It looks like it uses audio for syncing. A most unusual problem, and solution as well.