Workflow: no progress.
Dec 2, 2005
Earlier this year, I documented my workflow for transferring, processing, and posting my photos to Flickr. Sad to say, I have made little progress since then. New tech such as FlickrFS could perhaps ease my strain; the software is a userspace file system that allows uploading, searching, and downloading of photos directly from Flickr. (Check out the project’s homepage for more details.) Alas, I seem to prefer my old-fashioned step-by-step approach, perhaps similar to massdistraction’s penchant for hand-writing her blog entries in pico/nano. There’s a certain limit to the hand crafting, however… she doesn’t write RSS by hand, and I automate a few steps of my photo process. Yeah.
It all starts with the camera. I walk over to my Ubuntu box (an old dual-CPU P4 at 733 MHz) and plug in the camera. Lights flash, the red lights go off, and the camera is automatically mounted. Next, I run a simple shell script to:
- Ensure the camera is mounted on /mnt/tmp.
- Run `find -f` on the mount, looking for JPEG files.
- Process this file list through a Perl script. The `exif` command is executed to extract the image’s date from the EXIF data my camera writes into each JPEG. This is then used to create a new directory for the images.
- Each image found is copied, and progress is printed.
Granted, this is a convoluted mix of shell and Perl to get things done, and should really exist only as a process-prototyping tool. Nevertheless, once something works it is all too easy to let it become routine. Maybe it is time to pick up Python after all, or perhaps command-line Ruby.
Once these scripts are complete, I have a directory full of plain high-resolution JPEGs from the camera.
Why not RAW?
- RAW shooting on my camera (the Sony DSC-V3) is slow.
- The Sony RAW processing tools are available in Windows only. I run free software.
- My best attempts with `dcraw` have failed to yield results superior to what my camera does effortlessly and automatically.
- RAW is not a standard.
For these reasons, RAW is a waste of my time.
I am a packrat; I tend to keep everything unless it is actively in my way (or filling my computer’s disk—sorry, old porn!). So it goes with my photos. I use about 10% of the photos I shoot, for a variety of reasons: poor exposure, poor framing, bad concept, etc. The most painful throwaway photos are the ones that looked fantastic on the LCD preview screen and terrible on my monitor. So rather than throw away the 90% crap, I simply leave the photos in place and do not process them. If a particular photo is quite bad, I’ll delete it, but only in rare circumstances.
I use gThumb to browse through my pictures, first in thumbnail mode, and then individually. Once I find a picture I like in full-screen mode, it is off to the GIMP.
Bring out the GIMP
I know I’m not the first to make a Pulp Fiction joke about Linux’s foremost image editor, but that’s how it was named! In the GIMP I painstakingly adjust color balance and contrast until I get the effect I want. It is an extremely frustrating, obscure art. Mostly I attempt to get the histogram of each color component evenly distributed over the image’s available dynamic range. Sometimes this properly corrects an improper white balance; sometimes it doesn’t.
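For photos that need only that per-channel histogram stretch, the effect can be approximated non-interactively. Here is a sketch using ImageMagick’s `convert` as a stand-in for the hand work in the GIMP (the filenames are placeholders), splitting the image into its RGB planes, normalizing each, and recombining:

```shell
# Approximate the per-channel histogram stretch with ImageMagick
# (a stand-in for the interactive GIMP session, not my actual process).
# -separate splits the image into R, G, B grayscale planes,
# -normalize stretches each plane over the full dynamic range,
# -combine merges the planes back into one RGB image.
convert photo.jpg -channel RGB -separate -normalize -combine stretched.jpg
```

As in the GIMP, this sometimes fixes a bad white balance and sometimes makes it worse, since each channel is stretched independently.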
After I have made an image I’m happy with, I save it out as a TIFF (so I don’t have to redo all that work), resize it down to 800 pixels on its longest dimension, and upload it to Flickr. This is necessary because I feel Flickr’s image resizer sucks, and the 500-pixel images that most people view would otherwise be of inferior quality. So I oversample, resizing with cubic interpolation for a smooth image. In some cases this can help mitigate jitter; it also lets me cheat on framing and crop in production. No one is the wiser…. ha ha ha ha!
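The downscale itself is easy to reproduce outside the GIMP. A sketch with ImageMagick (again a stand-in for the GIMP step, with placeholder filenames), fitting the longest side into 800 pixels with a cubic filter:

```shell
# Resize so the longest dimension becomes 800 pixels, using cubic
# interpolation; the '>' suffix means "only shrink, never enlarge".
# (ImageMagick here is a stand-in for the resize I do in the GIMP.)
convert master.tif -filter Cubic -resize '800x800>' flickr-upload.jpg
```

The `800x800>` geometry preserves aspect ratio, so a 1600x1200 master comes out 800x600, which Flickr then shrinks to its 500-pixel view from a cleaner source.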
Now that I have exposed the rickety process behind my photo hobby, perhaps I will find ways to improve it. In the meantime, I think I’ll post my newest pictures to Flickr: