Beijing Air Quality: Making of the movies
The movies shown here involved stitching together hundreds of images of the tower shown above. This turned out to be more complicated than it sounds, since the images were not exactly aligned with one another, often due to strong winds on the days when the tower is actually visible.
To align and combine these images into a movie, a series of Python scripts was used. A description of each stage of the operation is given below, and the Python scripts are made available in case anyone else finds them useful.
Not every photo I took can be used in the final movies, typically because of the background light level. To standardise this, only photos taken between 8:30 and 11:30 am are selected using the following script, which simply copies files from the raw/ directory to the select/ directory based on their times. All files have the format YYYYMMDD_TIME_AQI.jpg, where YYYY is the year, MM the month, DD the day, TIME the time on a 24-hour clock and AQI the air quality index. The air quality index used here is the PM 2.5 value, which gives the concentration of particles smaller than 2.5 microns, in units of micrograms per cubic metre.
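The selection step can be sketched roughly as follows. This is not the original script; the function names and the assumption that TIME is a zero-padded HHMM string are mine.

```python
import os
import shutil

def select_by_time(filenames, start="0830", end="1130"):
    """Keep only filenames of the form YYYYMMDD_TIME_AQI.jpg whose
    TIME field falls in the morning window. Assumes TIME is a
    zero-padded HHMM string, so lexicographic comparison works."""
    selected = []
    for name in filenames:
        parts = name.split("_")
        if len(parts) != 3:
            continue  # skip anything that doesn't match the naming scheme
        if start <= parts[1] <= end:
            selected.append(name)
    return selected

def copy_selected(raw_dir="raw", select_dir="select"):
    """Copy the matching photos from raw/ to select/."""
    os.makedirs(select_dir, exist_ok=True)
    for name in select_by_time(sorted(os.listdir(raw_dir))):
        shutil.copy(os.path.join(raw_dir, name),
                    os.path.join(select_dir, name))
```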
In anticipation of the alignment phase, a black frame of 300 pixels was added to each raw image. The originals were either (1952, 3264), for 2012 and part of 2013, or (2448, 3264) afterwards, depending on which phone I was using. Position 1 used only the old image size; positions 2 and 3 used either, so this script was developed to handle both. The results were moved to the black/ directory for the next phase.
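The framing step amounts to padding each image with black on every side, so that the translations applied during alignment never push real content off the canvas. A minimal sketch using Pillow (the helper name is mine):

```python
from PIL import Image, ImageOps

def add_black_frame(img, border=300):
    """Pad an image with a 300-pixel black border on all sides,
    matching either original image size, so later translations
    during alignment only ever shift black pixels into view."""
    return ImageOps.expand(img, border=border, fill="black")
```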
This stage is by far the most complicated. Since the tower can be near-invisible in many photos finding a good target to track is difficult.
For position 1, the lamp post on the left of the photo was used; it is visible on even the most polluted days. The basic method is to find the joint in the bottom support for the light, then translate the image so that it lines up with a standard position. The standard position was chosen by inspecting the resulting tower image after stacking all the images. To find the target point, the background colour is sampled and its blue value stored. The image is then scanned from left to right along the middle of the frame until the darker lamp post is hit. The post is then scanned upwards until there is a dark line at the correct angle to the top left, indicating that the target joint has been found. The script for doing this is available here.
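The search described above can be sketched as follows. This is a simplified illustration, not the original script: it stops at the first horizontal hit on the post rather than walking up to the angled joint, and the background sample point, darkness margin and standard position are all assumptions of mine. The black border added earlier means the wrap-around behaviour of ImageChops.offset only brings black pixels into view.

```python
from PIL import Image, ImageChops

def find_lamp_post(img, dark_margin=40):
    """Sample the background blue value near the top-left corner,
    then scan the middle row from left to right until a pixel
    clearly darker than the background (the post) is hit."""
    px = img.load()
    w, h = img.size
    background_blue = px[10, 10][2]      # blue channel of the sky
    y = h // 2                           # middle row of the frame
    for x in range(w):
        if px[x, y][2] < background_blue - dark_margin:
            return x, y                  # first dark pixel: the post
    return None                          # post not found in this image

def align_to_standard(img, found, standard):
    """Translate the image so the found point lands on the chosen
    standard position; wrapped-in pixels are black thanks to the
    border added in the previous stage."""
    dx, dy = standard[0] - found[0], standard[1] - found[1]
    return ImageChops.offset(img, dx, dy)
```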
For positions 2 and 3 there is no clear reference point, so the tower itself is used. First, the background blue value is sampled from the top of the image. Next, the vertical middle is chosen and the image is scanned horizontally from the left and from the right to find the edges of the tower. Once the tower has been found, its top is located by checking the minimum colour between the left and right edges as the algorithm moves up the tower; when the colour returns to the background value, the top has been found. The Python script is found here.
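A rough sketch of that tower search, under my own assumptions about the sample point and darkness threshold (the original script may differ in detail; for brevity this version checks any dark pixel between the edges rather than the row minimum):

```python
from PIL import Image

def find_tower_top(img, dark_margin=40):
    """Sample the background blue at the top of the frame, scan the
    middle row inwards from both sides for the tower edges, then
    walk upwards until the whole row between the edges matches the
    background again: that row marks the tower top."""
    px = img.load()
    w, h = img.size
    threshold = px[w // 2, 5][2] - dark_margin   # sky sampled at the top
    y = h // 2                                   # start at the vertical middle
    left = next(x for x in range(w) if px[x, y][2] < threshold)
    right = next(x for x in range(w - 1, -1, -1) if px[x, y][2] < threshold)
    # Climb the tower until every pixel between the edges is background.
    while y > 0 and any(px[x, y][2] < threshold for x in range(left, right + 1)):
        y -= 1
    return (left + right) // 2, y + 1            # tower centre and top row
```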
Examples of aligned images from positions 1, 2 and 3 are shown below.
Once the images are aligned, a head-up display needs to be added. This stage takes the aligned images from the align/ directory and outputs to the hud/ directory. The process involves blacking out the area under where the text will go, then adding the text. The associated scripts are here for position 1 and for positions 2 and 3.
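The black-out-then-draw step looks roughly like this with Pillow; the strip height, text position and label format are illustrative, not the original's.

```python
from PIL import Image, ImageDraw

def add_hud(img, aqi, date_text, strip=60):
    """Black out a band at the bottom of the aligned image, then
    draw the date and PM 2.5 value on top of it in white."""
    hud = img.copy()
    draw = ImageDraw.Draw(hud)
    w, h = hud.size
    draw.rectangle([0, h - strip, w, h], fill="black")  # erase whatever is there
    draw.text((10, h - strip + 10), f"{date_text}  PM2.5: {aqi}", fill="white")
    return hud
```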
Resize all images from the hud/ directory to a height of 1024 pixels, writing the output to the resize/ directory, using this script.
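The resize itself is one Pillow call; this sketch (my own helper, with Lanczos resampling assumed) keeps the aspect ratio while fixing the height:

```python
from PIL import Image

def resize_to_height(img, target_height=1024):
    """Scale the image so its height is 1024 pixels,
    rounding the width to preserve the aspect ratio."""
    w, h = img.size
    new_w = round(w * target_height / h)
    return img.resize((new_w, target_height), Image.LANCZOS)
```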
Since most software for stitching images together into movies expects a neat integer sequence of filenames, this bash script makes one.
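The original uses a small bash script; an equivalent sketch in Python, with directory names and the zero-padded naming scheme assumed by me, would be:

```python
import os
import shutil

def make_sequence(src_dir="resize", dst_dir="seq"):
    """Copy the sorted images to a plain zero-padded integer
    sequence (0000.jpg, 0001.jpg, ...). Sorting the timestamped
    filenames lexicographically also sorts them chronologically."""
    os.makedirs(dst_dir, exist_ok=True)
    names = sorted(n for n in os.listdir(src_dir) if n.endswith(".jpg"))
    for i, name in enumerate(names):
        shutil.copy(os.path.join(src_dir, name),
                    os.path.join(dst_dir, f"{i:04d}.jpg"))
```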
All of the sorted images are combined into a movie using the free software MakeAVI, available here. The specific settings for the movies shown here were 5 frames per second with MS Video 1 compression at 85% quality.