Wednesday, February 13, 2013

Talking Exposure Timer

This is a simple talking countdown timer for Android. As the title says, its purpose is to help with taking pictures where a long exposure is required and one can't look at a watch, for example in astrophotography. That is also the main reason why the user interface is black and red. So, according to my intentions, the primary users should be people doing astrophotography with manual tracking or barn door tracking.
The user interface is simple: frequency-of-announcements and duration spinners at the top, start and cancel buttons in the middle, and an info text at the bottom. If the text-to-speech engine fails to initialize, the start button is disabled and an appropriate message is displayed. If the text-to-speech engine initializes, it will use the phone's default language. Cancel is not immediate; it is applied on the first delayed post, during the next announcement of remaining time.
To use it, select how frequently you want announcements and how long the exposure should be. After a delay of 10 seconds, common for DSLR cameras with a remote shutter in BULB mode, it starts talking and counting down. While the countdown lasts, the application holds a wake lock: the screen will go dim but will not go to "sleep". It is compiled for and tested on an Android 2.2 device, the LG-P500.
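As an illustration of that schedule (my own sketch, not code from the app; the exact offsets, including whether an announcement lands at the very end of the exposure, are assumptions), here is how the 10-second lead delay combines with the announcement frequency:

```java
import java.util.ArrayList;
import java.util.List;

public class AnnouncementSchedule {
    // Offsets (in seconds from pressing start) at which remaining time is
    // announced: the countdown begins only after the fixed 10 s shutter delay.
    static List<Integer> offsets(int durationSec, int frequencySec) {
        List<Integer> times = new ArrayList<>();
        for (int elapsed = 0; elapsed <= durationSec; elapsed += frequencySec) {
            times.add(10 + elapsed);
        }
        return times;
    }

    public static void main(String[] args) {
        // a 60 s exposure announced every 15 s
        System.out.println(offsets(60, 15)); // [10, 25, 40, 55, 70]
    }
}
```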
A signed APK is available for download from here https://docs.google.com/file/d/0B0cIChfVrJ7WbkhISE5ZdGdSdmM/edit?usp=sharing; download it, copy it to the SD card and open it in a file manager to install.
The Eclipse project is here https://docs.google.com/file/d/0B0cIChfVrJ7WaS1oRl9lTUJUNmc/edit?usp=sharing; download it, import it in Eclipse and build.


Tuesday, February 12, 2013

Spinner and ArrayAdapter

This is the simplest possible tutorial, but I need a Spinner to allow the user to set the time interval and frequency for the countdown timer. So let's explain it; maybe somebody doesn't know how to use it.
Spinner is a control which corresponds to a drop-down, like an HTML select or a Java/Swing JComboBox. When the user taps it, it expands and pops up a list view where a selection can be made. After the selection is made, the list view disappears and the selected item's value is shown on the collapsed control. Data binding and selection events are handled by ArrayAdapter and AdapterView. The Android SDK has an example of loading an ArrayAdapter from a resource; I will load it from an array of strings instead. Here is the complete code:

public class MainActivity extends Activity implements AdapterView.OnItemSelectedListener {
    private static final String[] items = {"5", "10", "15", "20", "30"};
    private ArrayAdapter<String> adapter;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        Spinner spinner = (Spinner) findViewById(R.id.spinner1);
        spinner.setOnItemSelectedListener(this);
        adapter = new ArrayAdapter<String>(this, android.R.layout.simple_spinner_item, items);
        adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
        spinner.setAdapter(adapter);
        spinner.setSelection(0);
    }

    @Override
    public void onItemSelected(AdapterView<?> parent, View view, int position, long id) {
        Toast.makeText(this, adapter.getItem(position) + " is selected", Toast.LENGTH_LONG).show();
    }

    @Override
    public void onNothingSelected(AdapterView<?> parent) {
        Toast.makeText(this, "Nothing selected", Toast.LENGTH_SHORT).show();
    }
}


Add AdapterView, ArrayAdapter, Spinner and Toast to the default imports. In the layout, activity_main.xml, I placed one Spinner and accepted the default name for it. AdapterView.OnItemSelectedListener is implemented by the activity itself, which again differs from the SDK example. The difference between android.R.layout.simple_spinner_dropdown_item and android.R.layout.simple_spinner_item is that the first one gives you a nicer user interface with a radio button. To the ArrayAdapter constructor we must pass an array of strings; an array of integers won't work. Before the UI is shown, we set the selected item to position 0. Selecting different items will demonstrate how the listener works.
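If the values you want to offer really are integers, a small sketch of mine (not from the project) for converting them into the String[] that the ArrayAdapter constructor accepts:

```java
import java.util.Arrays;

public class SpinnerItems {
    // Convert integer choices to the String[] that ArrayAdapter displays.
    static String[] toStringItems(int[] values) {
        String[] items = new String[values.length];
        for (int i = 0; i < values.length; i++) {
            items[i] = Integer.toString(values[i]);
        }
        return items;
    }

    public static void main(String[] args) {
        int[] seconds = {5, 10, 15, 20, 30};
        System.out.println(Arrays.toString(toStringItems(seconds)));
        // prints [5, 10, 15, 20, 30]
    }
}
```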

Sunday, February 10, 2013

Timer? No, it is Handler.postDelayed

In a previous post http://grumpyoldprogrammer.blogspot.com/2013/02/text-to-speech.html I managed to initialize TTS. Now, with a talkative Android, I can go and implement the countdown timer. The whole story with SoundPool and TTS was about needing a tool to count the exposure for me while I am doing manual tracking. Only I won't use Timer or CountDownTimer but an ordinary Handler. I leave the whole code from the TTS post as it is, change the button's onClick listener and add a couple of variables. These are the new class variables:

private int counter = 60;
private Handler timer;
private boolean timerRunning = false;
private Runnable timerTask = new Runnable() {
    public void run() {
        // Announce the current value, then either schedule the next
        // announcement or reset for the next run.
        myTTS.speak(Integer.toString(counter), TextToSpeech.QUEUE_FLUSH, null);
        if (counter > 1) {
            counter -= 5;
            timer.postDelayed(timerTask, 5000);
        } else {
            counter = 60;
            timerRunning = false;
        }
    }
};


For a more elaborate application I would make the counter and the delay adjustable, but for now I like it simple. Don't forget to create an instance of Handler. Now the new onClick method:

public void onClick(View v) {
    if (!timerRunning) {
        timerRunning = true;
        timer.post(timerTask);
    }
}


If there is no active cycle, onClick will start a new one. Runnable.run asks TTS to read the counter and decreases it by 5, which matches the 5000 ms second parameter of postDelayed, and finally reposts itself using the Handler. When the counter reaches 0 we reset it back to 60 and clear the flag so that the onClick listener can be used again. On the emulator, a 60-second countdown takes about 60.3 seconds to execute, so it is not very precise, but not unusable either.
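The arithmetic of that loop can be checked without an emulator. This plain-Java sketch of mine reproduces the sequence of values the Runnable would speak, with the Android calls reduced to comments:

```java
import java.util.ArrayList;
import java.util.List;

public class CountdownSequence {
    // Values announced by the Runnable: start, start-step, ..., and a final 0.
    static List<Integer> announcements(int start, int step) {
        List<Integer> spoken = new ArrayList<>();
        int counter = start;
        while (true) {
            spoken.add(counter);      // myTTS.speak(...) in the app
            if (counter > 1) {
                counter -= step;      // matches the 5000 ms postDelayed
            } else {
                break;                // counter reset, flag cleared
            }
        }
        return spoken;
    }

    public static void main(String[] args) {
        System.out.println(announcements(60, 5)); // 60, 55, ..., 5, 0
    }
}
```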

Saturday, February 9, 2013

Text-To-Speech

This one is also known as TTS and it has been present on the Android platform since version 1.6. Usage is uncomplicated, but initialization is not. The whole initialization issue is about whether we can have the desired language, i.e. whether it is supported on a particular phone. To request a language we need to implement the OnInitListener interface, which is defined in android.speech.tts.TextToSpeech.OnInitListener; to use TTS itself we need android.speech.tts.TextToSpeech, and we will import both of them. Before setting any language we want to check what is available, and that goes in the form of a question and an answer. Inside onCreate we will use an Intent:

Intent checkTTSIntent = new Intent();
checkTTSIntent.setAction(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
startActivityForResult(checkTTSIntent, MY_DATA_CHECK_CODE);


Naturally, to hear the reply we will implement onActivityResult:

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == MY_DATA_CHECK_CODE) {
        if (resultCode == TextToSpeech.Engine.CHECK_VOICE_DATA_PASS) {
            myTTS = new TextToSpeech(this, this);
        } else {
            Intent installTTSIntent = new Intent();
            installTTSIntent.setAction(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
            startActivity(installTTSIntent);
        }
    }
}


Here myTTS is a class-level variable. If we have the TTS data we create a TTS instance; otherwise we start the installation process, which takes the user online. The previous two steps are an Intent story, the simplest form of IPC on the Android platform; everybody should know that. Since we passed this as the OnInitListener parameter to the TTS constructor, we should now implement that listener and set the language.

public void onInit(int status) {
    if (status == TextToSpeech.SUCCESS) {
        if (myTTS.isLanguageAvailable(Locale.ITALY) == TextToSpeech.LANG_COUNTRY_AVAILABLE)
            myTTS.setLanguage(Locale.ITALY);
        else
            myTTS.setLanguage(Locale.UK);
    } else {
        Toast.makeText(this, "TTS failed!", Toast.LENGTH_LONG).show();
    }
}


If init was successful, we check whether we can have Italian; if Italian is not available, we settle for proper English. Now that initialization is done, we can pass strings containing the desired speech to TTS and listen:

String story1 = "are you done yet";
String story2 = "when it will be ready";
myTTS.speak(story1, TextToSpeech.QUEUE_FLUSH, null);
myTTS.speak(story2, TextToSpeech.QUEUE_ADD, null);


Obviously we are mimicking a project manager. If Italian succeeded, we should use "stai ancora finito" and "quando sarà pronto", according to Google Translate, but that is not important right now.

Why use RAW?

That question is asked quite frequently, in different forms, on the Google+ Open Source Photography community https://plus.google.com/u/0/communities/110647644928874455108.
For me, the argument that RAW encodes data at 14 bits per channel vs 8 bits per channel for JPEG is more than sufficient. But for a non-technical person that is not very convincing. If we add the fact that the camera's internal processor does a great job in 90% of situations when it creates the JPEG, and that you need quite a high level of processing skill to achieve the same starting from RAW, the rock-solid RAW argument doesn't look so good. People advocating RAW start to sound like Groucho Marx: "Who are you going to believe, me or your lying eyes?"
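To put rough numbers on the bit-depth argument, here is a small sketch of mine comparing the tonal levels per channel:

```java
public class BitDepth {
    // Number of distinct values a channel can hold at a given bit depth.
    static long levels(int bitsPerChannel) {
        return 1L << bitsPerChannel;   // 2^bits
    }

    public static void main(String[] args) {
        System.out.println("JPEG  8-bit: " + levels(8)  + " levels");  // 256
        System.out.println("RAW  14-bit: " + levels(14) + " levels");  // 16384
        System.out.println("ratio: " + levels(14) / levels(8) + "x");  // 64x
    }
}
```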
So, here is a simple example of why RAW is good. A camera's handling of great differences in luminosity is nowhere near the human eye's. With automatic exposure on, we will get either the dark too dark or the bright too bright. The offending photo is stored as RAW and I will open it in Darktable. It should open by default in darkroom mode. After doing sharpening and lens correction in the correction group (the one with the broken circle symbol), I switch to the basic group. Here is what we get:



After activating the overexposed plugin, all overexposed parts become red, like here:



To remedy that, I switch on the exposure plugin and push the slider to an unreasonably low -2.61 EV. Now everything that is underexposed is blue.



After adjusting the exposure to a reasonable -1.42 EV, we still have some underexposed areas, but those are in shade and we can safely ignore them.



Now we can export it and do further processing in GIMP, or we can even try exporting different levels of exposure and later do exposure blending.

Wednesday, February 6, 2013

Simple SoundPool example

I needed it for a timer which plays a beep every 15 to 30 seconds. Since I am using a remote shutter release and doing manual tracking (astrophotography), I can't glance away to check how long the exposure has been and then go back to tracking if it is not long enough.
SoundPool is intended to load audio from resources in the APK; the MediaPlayer service immediately decodes the audio to raw 16-bit PCM, which stays loaded in memory. Multiple audio streams can be played simultaneously; they can't be started simultaneously, but they can overlap. Anyway, take a look at the documentation http://developer.android.com/reference/android/media/SoundPool.html
Doing research on the Web, I concluded that Ogg Vorbis is the preferred file format for the audio files and that they should not be longer than a few seconds. To convert audio files to Ogg Vorbis I am using Audacity http://audacity.sourceforge.net/, which works on all major operating systems.
So here is the code for a very simple player:

private SoundPool player;
private SparseIntArray soundList;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    player = new SoundPool(2, AudioManager.STREAM_MUSIC, 100);
    soundList = new SparseIntArray();
    soundList.put(1, player.load(this, R.raw.biska, 1));
    Button button1 = (Button) findViewById(R.id.button1);
    button1.setOnClickListener(button1OnClickListener);
}


We have a SoundPool for a maximum of two streams, on the music stream type, with source quality 100 (the documentation notes that this parameter currently has no effect and recommends passing 0). Next we associate a known key with the result of loading the audio from the resource.
To play it, we want to set the volume so that it matches the current media volume on the target phone:

Button.OnClickListener button1OnClickListener = new Button.OnClickListener() {
    @Override
    public void onClick(View v) {
        AudioManager audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
        float curVolume = audioManager.getStreamVolume(AudioManager.STREAM_MUSIC);
        float maxVolume = audioManager.getStreamMaxVolume(AudioManager.STREAM_MUSIC);
        float volume = curVolume / maxVolume;
        player.play(soundList.get(1), volume, volume, 1, 0, 1f);
    }
};


Proper cleanup consists of calling the release() method on the SoundPool instance and setting the reference to it to null.
Here is the link to the complete example https://docs.google.com/file/d/0B0cIChfVrJ7WRW10SWFmd2ZZUzg/edit?usp=sharing; the included Ogg Vorbis file is about 5 seconds long. If you click the play button twice, you can "enjoy" the simultaneous play of two streams.

Friday, February 1, 2013

Bits and pieces

This is something I should have said in previous posts but forgot. If you are aligning a stack using Hugin tools: align_image_stack is an HDR tool and it cannot handle a significant number of images in a stack, or significant movement between images, very well. The workaround here is the venerable divide-and-conquer strategy. Divide the images into groups of a few and stack them group by group. Then, at the end, stack those intermediate results. If your exposures are uneven, stack the shorter ones first and later add them to the longer ones. Specifying the stacking order instead of giving an asterisk may help.
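The grouping itself is mechanical; here is a sketch of mine (the file names are made up) that splits a frame list into groups of a given size, one group per intermediate stacking pass:

```java
import java.util.ArrayList;
import java.util.List;

public class StackGroups {
    // Split a list of frame names into consecutive groups of at most `size`;
    // each group is stacked on its own before the final combining pass.
    static List<List<String>> groups(List<String> frames, int size) {
        List<List<String>> result = new ArrayList<>();
        for (int i = 0; i < frames.size(); i += size) {
            result.add(new ArrayList<>(frames.subList(i, Math.min(i + size, frames.size()))));
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> frames = List.of("img1.tif", "img2.tif", "img3.tif",
                                      "img4.tif", "img5.tif");
        System.out.println(groups(frames, 2)); // three groups: 2 + 2 + 1 frames
    }
}
```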
After every contrast-stretching operation, keep the noise level under control. The best tool around is wavelet denoise, which is part of gimp-plugin-registry. Don't overdo the denoising or you will lose detail and sharpness. A reasonable level is up to 1.25 with residual 0.10.
If you are into photography, adding the following repositories may be interesting:

sudo add-apt-repository ppa:otto-kesselgulasch/gimp
sudo add-apt-repository ppa:pmjdebruijn/darktable-unstable
sudo add-apt-repository ppa:philip5/extra
sudo add-apt-repository ppa:hugin/hugin-builds


The PPA names usually contain the name of the package, except for Philip Johnsson's PPA. That one is suggested because it contains Luminance HDR, but it also contains about everything else, though the others have newer versions. When you want to install Luminance HDR, specify luminance-hdr and not the common qtpfsgui; for qtpfsgui you will get the old version from the official repo.
For Entangle, a program for tethered camera control, you download the deb from:

http://mirrors.dotsrc.org/getdeb/ubuntu/pool/apps/e/entangle/


which is a GetDeb mirror. Pick 32 or 64 bit depending on what system you are running. There are no exotic dependencies for Entangle.

In order to increase capacity and avoid loss of data, many picture-processing tools internally use 32-bit float per channel. That is perfectly alright, though I do not know of any CCD which digitizes a picture into anything better than 16-bit integer per channel. The problem is that you can't easily view those 32-bit-float-per-channel TIFF images; there are not many viewers around for them. The popular image-stacking program Deep Sky Stacker produces Autosave.tif in that format and you can't see what it looks like, which is very irritating. G'MIC is a perfectly capable viewer for many image formats and it can handle 32-bit float per channel TIFF. Once you install it, open a terminal (command line) and cd to the folder with Autosave.tif. Execute:

gmic Autosave.tif

and a GUI with Autosave.tif will show up; moving the cursor over the image, you will be able to see the values for the current pixel. Now, if you want to convert those TIFF images for less capable viewers, you can also achieve that using G'MIC. For Deep Sky Stacker, for example, channel values will be between 0 and 1. So if we want to convert that into 16-bit integer, we simply multiply the channel value by 2 to the power of 16, minus 1, which is 65535. Naturally, we are talking about unsigned integers. So the magic formula to convert it to 16-bit TIFF is

gmic Autosave.tif -mul 65535 -c 0,65535 -type ushort -output auto16.tiff

Now we can use almost any viewer to see what it looks like.
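The same scaling can be sketched in Java (my own illustration of the arithmetic, not something G'MIC provides):

```java
public class FloatToUShort {
    // Map a normalized [0,1] float channel value to an unsigned 16-bit
    // integer, clamping out-of-range input -- the same arithmetic as the
    // gmic -mul 65535 -c 0,65535 -type ushort pipeline.
    static int toUShort(float v) {
        int scaled = Math.round(v * 65535f);
        return Math.max(0, Math.min(65535, scaled));
    }

    public static void main(String[] args) {
        System.out.println(toUShort(0.0f)); // 0
        System.out.println(toUShort(1.0f)); // 65535
        System.out.println(toUShort(0.5f)); // 32768
    }
}
```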