Getting organized

My friend and neighbor Mel is going to help me make my first Dob. He’s a retired tool-and-die guy and knows how metal works. I’ve been studying David Kriege and Richard Berry’s excellent book on the topic, The Dobsonian Telescope. Per their recommendations, we are beginning with the mirror cell, and between the odds and ends Mel has in his shop and a few items ordered from a surprisingly local metal shop, we may be about to begin. Mel has cutters and welders, drills and taps and all manner of metalworking tools, plus the expertise born of fifty-odd years of doing it. Now if, between Mel’s place and mine, we can just find a flat surface large enough to lay everything out. Both of us seem to have a too-much-stuff problem.

Using toilet paper rolls as wire organizers

Mel’s Central Organizing Principle

Mel, at least, has found a way of organizing the myriad wired dinguses we all seem to accumulate.

Rainbows in the ice

We had all three grandkids this weekend, which is always a recipe for mayhem. In the backyard, Job had managed to get Rose pretty wet, so she was mad and went in. Then he threw a wooden rubber band gun which hit Bri in the butt. He had to, he was out of ammo! But she was kinda mad at that. So I had to read Job the riot act, in the course of which I may have inverted him over the water trough. In this position, he observed that the water had frozen in a thin clear sheet. This distracted us both, and we proceeded to try to break the ice by punching it. That was a mistake, because the ice was thicker than it looked, and it felt like punching a brick wall. A quick search found a small rock, and a few quick jabs poked a hole in the ice. Some cracks formed, and Job said there were rainbows in the ice!

Iridescence in ice

To make sure there was no oily stuff in the water, we took off our gloves and started picking out chunks of ice. Close up, we could see that the rainbows appeared within the ice, along cracks that went through its thickness. Job had discovered optical interference! Presumably the iridescent colors result from light bouncing around in the cracks: interference makes the colors as light waves reinforce or cancel each other out in the resulting melee. Wow, that water was cold! Job and I beat a hasty retreat to get our hands under some warm water.

Newton’s rings, interference patterns between glass slides

Inside we made our own rainbows (“Newton’s rings”) with two glass slides.  Like the ice, the two pieces of glass offer closely spaced reflective surfaces for light to bang around between.  In the picture, I’m putting some strain on the slides to give a little curvature and force them close enough together to get good interference.  It’s lit with a compact fluorescent bulb, so the colors are a bit weird, since fluorescents don’t give us a continuous spectrum.
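The fringe condition behind those rings can be sketched numerically. This is a rough illustration only, assuming a simple air gap between the slides with the usual half-wave phase shift at the lower glass surface; the function name and gap value are mine, not anything measured from the slides:

```python
# Sketch: which visible wavelengths interfere constructively in a thin
# air gap between two glass slides (Newton's rings). The reflection at
# the lower glass surface picks up a half-wave phase shift, so bright
# fringes occur where 2*d = (m + 1/2)*lambda.
def bright_wavelengths(gap_nm, lo=380, hi=700):
    """Return visible wavelengths (nm) reinforced by an air gap of gap_nm."""
    out = []
    m = 0
    while True:
        wl = 2 * gap_nm / (m + 0.5)
        if wl < lo:          # past the blue end of the visible band
            break
        if wl <= hi:
            out.append(round(wl, 1))
        m += 1
    return out

# A ~500 nm gap reinforces more than one visible wavelength at once,
# which is part of why the fringes look colored rather than white.
print(bright_wavelengths(500))
```

As the gap thickness varies along a crack or between bowed slides, the reinforced wavelengths shift, and you get the bands of color.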


In search of the flat field

I’ve been thinking about the problem of evaluating my flat field lighting and target, and two approaches came to mind. One is to make some flats, rotate the camera 180 degrees, make more flats, and then measure the difference. I haven’t done that, but it seems like it should work. The other is to photograph the target in a wide field with my DSLR and look at the color rendering and the light distribution over the field.
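Here’s a rough numpy sketch of how that rotation test could be scored. The function and the toy data are placeholders of mine, and it assumes the two flats are registered; the useful property is that everything fixed to the camera (vignetting, dust shadows) cancels in the division, leaving only the target’s unevenness:

```python
import numpy as np

def rotation_test(flat_a, flat_b):
    """flat_a: a flat frame; flat_b: a flat taken after rotating the
    camera 180 degrees. Dividing the two cancels everything fixed to
    the camera (vignetting, dust donuts); what survives is the target
    illumination gradient, seen twice over."""
    ratio = flat_a / flat_b
    ratio /= np.median(ratio)              # normalize to ~1
    return (ratio.max() - ratio.min()) / 2  # ~ peak-to-peak target error

# Toy example: even camera response, target with a 10% gradient.
grad = np.tile(np.linspace(0.95, 1.05, 100), (100, 1))
frame_a = grad                  # camera upright
frame_b = np.rot90(grad, 2)     # camera rotated: target appears flipped
print(f"{rotation_test(frame_a, frame_b):.3f}")
```

In the toy case the returned number comes out near the target’s full 10% peak-to-peak error, since the gradient shows up doubled in the ratio.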

Here’s a stretched look at the master flat I had made.  It actually did a pretty good cosmetic job on the NGC80 image.

stretched master flat

It shows the vignetting produced by the optical train. The “donuts” are shadows of dust bunnies on the cover glass of the CCD chip; they’re shaped like that because my optical system has a secondary mirror that forms a central obstruction. When this image is applied to the image of NGC 80, all that crap is magically removed. So flat fielding removes artifacts of the optical system and CCD, but requires an evenly lit target to work. Here’s the DSLR picture of the first try at a target:

first flat field target

First, it’s really red. I’m using a 15 watt tungsten bulb to illuminate the target, bouncing it off a white reflector before it hits the white target. It looks like about a 2800K source, and it gets even redder bouncing off the plywood interior of the hut. Color matters because I will do photometry and try to get very accurate measurements of star or minor planet brightnesses, and the sensitivity of the chip varies a bit with wavelength. So I’m aiming at a more “daylight” balance, on the theory that that’s a pretty average star color, being the color of our star. Let’s see how even it is.

I converted the image to grayscale and used the horizontal box tool in Maxim to see average values across the image. You can see that the illumination drops off on the right-hand side. It looks to me like if I can level out the profile, I should have a pretty good target. The light is falling off because the scope is casting a shadow on the target. It worked OK with my previous setup, but needs a fix with the new mount. Here’s the way the setup should look.

I put some tough blue, a lighting gel used to convert tungsten to daylight, on the lamp, and fiddled with the reflectors.  The blue really helps the color balance.

The reflector and target are both white foamcore. I need to come up with an intelligent mount for the cards; right now they are just taped and wired, and aren’t parallel. Also, I find that the sweet spot for the scope is right on the stud.


The illumination still looks pretty blotchy, so back to Maxim for more measurements.

At least the slope is in the opposite direction! I still have about 10 percent variation across the target. The next fix is to properly mount the target and reflector, which I think will get it pretty close.
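The kind of measurement the Maxim box tool makes is easy to approximate in a few lines of numpy. This is just an illustrative sketch of mine (column means over a band of rows, and a peak-to-peak variation figure), not what Maxim actually does internally:

```python
import numpy as np

def horizontal_profile(img, top, bottom):
    """Average a horizontal band of rows, like Maxim's box tool:
    one mean value per column across the image."""
    return img[top:bottom].mean(axis=0)

def falloff_percent(profile):
    """Peak-to-peak illumination variation as a percent of the mean."""
    return 100 * (profile.max() - profile.min()) / profile.mean()

# Toy flat with a 10% falloff toward the right-hand side:
img = np.tile(np.linspace(1.0, 0.9, 200), (100, 1))
prof = horizontal_profile(img, 40, 60)
print(f"{falloff_percent(prof):.1f}%")
```

A single number like this would make it easy to tell whether each tweak to the reflector and target actually flattens the field.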

3D with Lytro!

I haven’t been paying as much attention as it deserves, but Lytro has provided two updates to its software that completely fulfill its promise. Jeesh. I wrote earlier (http://jjmcclintock.com/wordpress/?p=104) that Lytro had to offer two things the light field tech is capable of but wasn’t yet delivering: deep focus, and the ability to generate stereo pairs.

First, in October, they came up with a firmware/software update that provided manual controls: the ability to crank up sensitivity for low light, shoot long exposures (up to 8 seconds), and use a neutral density filter for longer exposures in daylight. See http://blog.lytro.com/news/new-manual-controls-expand-creative-capabilities/ That is a major move toward professional use, and I didn’t even have the sense to ask for it.

Then another update offered “Living Perspective” (Dec 4, 2012: http://blog.lytro.com/news/its-here-see-your-pictures-with-perspective-shift-and-living-filters/). After a bit of processing, you can access the depth information in the image (including images shot before the update). In the process, the image snaps into focus in depth. In one update, deep focus and stereo info are revealed.

I made a picture of our hens shortly after receiving my Lytro (and before they entered the Gulag).

I thought it was cool you could refocus, but let’s face it, the problem of focus in photography is recovering deep focus from bad focus. Being able to fool around with focus after the fact is nice, but not really essential. I still have an image that is mostly out of focus, although it works well here. The Living Perspective update changes that: it brings the entire image into focus in order to separate the depth planes, and by shifting the image left and right, it reveals the depth information captured in the light field. So far this works on their website and on the desktop, but the WordPress plugin doesn’t work yet. I assume that will change soon. There’s not yet a “Lytro” way to export the pairs, but using screen capture software like Grab, it’s easy to collect the pair. I happen to like crossed-eye stereo viewing (the “Holmes card”), and here’s the result:

We have a nice stereo pair, in focus from beak to  fence line, shot with a single lens, using light field technology.

Another update Lytro has offered is their “Living Filters”: effects that are applied to the various depth planes of the image and change somewhat as you shift the perspective. I particularly like the “8-track” effect, which looks to me like faded Kodacolor.

The apple blossoms worked nicely with the 8-track filter, and remind me of lithographed stereo cards from the early 1900s. Because Photoshop is part of the workflow, one could easily add half-tone…

The obvious filter to add is “Anaglyph”.  Dig out those glasses!

I hate anaglyphs. They don’t work well for me, and they’re ugly without glasses. And, uh, you need glasses! But they are the most common 3D images out there, and it would be dirt simple for Lytro to help you make them. They also work well at the image size Lytro exports, 1080×1080. They could sell Lytro-branded glasses! The above portrait of Annie the beagle was made in color with Lytro, then split left-right with the Lytro software and Grab to make a color left-right pair. I made those b&w in Photoshop. These were placed in Anaglyph Workshop (http://www.tabberer.com/sandyknoll/more/3dmaker/anaglyph-software.html) to produce the anaglyph.
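The basic red-cyan merge that a tool like Anaglyph Workshop performs can be sketched with numpy and Pillow. This is my own minimal grayscale version with hypothetical filenames, not what that program actually does: the red channel comes from the left eye, green and blue from the right.

```python
import numpy as np
from PIL import Image

def gray_anaglyph(left_path, right_path, out_path):
    """Simple grayscale red-cyan anaglyph: red channel from the left-eye
    image, green and blue channels from the right-eye image."""
    left = np.asarray(Image.open(left_path).convert("L"))
    right = np.asarray(Image.open(right_path).convert("L"))
    out = np.dstack([left, right, right])  # (red, green, blue)
    Image.fromarray(out.astype(np.uint8)).save(out_path)

# Hypothetical filenames for a Lytro-exported pair:
# gray_anaglyph("annie_left.png", "annie_right.png", "annie_anaglyph.png")
```

Starting from the b&w pair, as above, avoids the retinal rivalry you get when strongly colored regions land in only one eye’s channel.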

If Lytro can scale up a camera to professional levels, it has great possibilities. Right now it’s using a maybe 11 megapixel chip to produce a 1.2 megapixel image, and I think that technically they will be stuck with that kind of ratio. Making larger images is going to require a much larger chip and a big clunky box with big optics, and it’s probably really difficult to have interchangeable optics. A bigger image would be even more demanding on the computer processing power that makes the images perform their magic. It’s really impressive that the major improvements in the product have been achieved with (free) firmware and software updates. This is going to be fun to watch!


Focus, Flats and NGC 80

No moon; the evening started with thin cirrus clouds that disappeared an hour or so after sunset, I think. With those kinds of clouds I’m never sure: I suspect they are actually there to make halos and generally degrade the image, even though you can’t see them. I thought it was good enough for a focus run with FocusMax. I did find that the focuser wants to be connected to Maxim or FocusMax, but not both, so I connected to FocusMax and pretty much just ran the “First Light” routine. I selected a 4th magnitude star in Pegasus and began the run. The routine built a pretty straight slope from the left, and as the star approached focus, it had an obvious bloom spike. The curve bottomed, went up, then back down, then up again in a fairly tidy rounded W shape. FocusMax didn’t like the bloom; the routine failed. The recommendations were for a “fairly bright star,” mag 3-5. A nearby 5th magnitude star had the same result, but when I ran it with a 6th magnitude star, the routine ended with a very tidy V curve. The wizard then did a sample auto-focus, which was pretty far off. The curves are written to a system profile file, which had some old runs in it; I deleted the old runs and made another V curve. Then the autofocus routine came in fairly close. I ran about six V curves, which all looked pretty similar. After re-reading the documentation, I see I should manually expand the in-out range of the focus run and use smaller intervals, which should provide a more accurate focus.
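As I understand it, the point of the V curve is that star size falls and rises linearly on either side of focus, so best focus can be predicted by fitting lines to the two slopes and intersecting them. Here’s a toy sketch of that idea; the sample numbers are made up, and this is my own simplification, not the FocusMax algorithm:

```python
import numpy as np

def best_focus(positions, hfd):
    """Fit straight lines to the two sides of a V curve (star size vs
    focuser position) and return their intersection as the predicted
    best-focus position."""
    positions = np.asarray(positions, float)
    hfd = np.asarray(hfd, float)
    i = hfd.argmin()  # rough bottom of the V
    m1, b1 = np.polyfit(positions[:i + 1], hfd[:i + 1], 1)  # left slope
    m2, b2 = np.polyfit(positions[i:], hfd[i:], 1)          # right slope
    return (b2 - b1) / (m1 - m2)  # where the two lines cross

# Toy V curve with its bottom near focuser position 5000:
pos = [4700, 4800, 4900, 5000, 5100, 5200, 5300]
hfd = [9.1, 6.0, 3.1, 0.4, 3.0, 6.1, 8.9]
print(round(best_focus(pos, hfd)))
```

A wider in-out range and smaller steps give the line fits more points on each slope, which is presumably why the documentation recommends expanding the run.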

I shot a luminance series (10- 120 sec) on a field that looked interesting in TheSky, in what I assumed to be Pegasus but turned out to be Andromeda; the east side of the Great Square is really right on the border. The field includes NGC 80 and about 22 other galaxies, including NGC 81 and 83. Not exactly showcase objects, but a very rich neighborhood! I think the focus still needs work, and I also still have somewhat oval stars in the east-west direction. PixInsight actually tidies those up via data rejection.

Field centered on NGC 80; 10 – 120 sec luminance with 12″ Meade LX200 and ST-10XME. Calibrated and stacked in PixInsight, linear adjustments in Photoshop.

I set up a white foam core flat field target at the Park 1 position, on the wall of the hut. It’s illuminated with a 15 watt incandescent lamp in a floor fixture that’s aimed at the white foam core reflector on the opposite wall. For remote operation, I can turn on the light via a web switch (not set up yet). It gave a well-exposed flat field in a 1 second exposure. It seems like there should be a good way to objectively analyze how flat the field actually is. The line analysis tool in Maxim showed an apparently symmetrical distribution vertically and horizontally. Also, the color temperature of the illumination is maybe 3200K (a low wattage bulb mixed with bounce from unpainted plywood), so there are color implications. Nevertheless, the flat seems to do at least a good cosmetic job of correcting the light images.
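The arithmetic behind that cosmetic correction is simple enough to sketch: subtract the darks, normalize the flat, and divide. This is a toy numpy version of my own, not what Maxim or PixInsight literally do, and the numbers are invented:

```python
import numpy as np

def apply_flat(light, dark, flat, flat_dark):
    """Basic flat-field calibration: subtract darks, then divide the
    light frame by the flat normalized to its mean. Dust shadows and
    vignetting appear identically in both frames, so they divide out."""
    flat_c = flat.astype(float) - flat_dark
    norm_flat = flat_c / flat_c.mean()
    return (light.astype(float) - dark) / norm_flat

# Toy example: a dust shadow that cuts one pixel's response in half.
true_sky = np.full((4, 4), 1000.0)
response = np.ones((4, 4))
response[0, 0] = 0.5                 # the dust shadow
light = true_sky * response          # what the chip records
flat = 20000.0 * response            # the flat sees the same shadow
zeros = np.zeros((4, 4))
cal = apply_flat(light, zeros, flat, zeros)
print(cal[0, 0], cal[1, 1])          # shadowed and clean pixels now match
```

The key assumption, as noted above, is that the flat itself is evenly lit; any unevenness in the target gets imprinted on every calibrated image.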

Hunting Season

We had our deer hunting season after Thanksgiving.

I’m not a hunter, but we let some friends hunt on the farm as a way of controlling the influx of strangers who often trespass during the season.  We are usually offered some meat in exchange if they have success.  Our Amish friend Daniel got an 8 point buck on Monday, and I stopped over at his house the next Tuesday to pick up some really nice deer sausage.

We got a call from Raymond Miller, an old neighbor who was hunting adjacent land, asking permission to track a deer he had hit but not killed. It was pretty surprising to be asked, actually, but of course you want the hunter to collect his deer, which is way better than stumbling over a decaying carcass later on. Jean was talking to a friend at Owl Creek who said he found five shot deer on their farm this year. What a waste. Anyway, it was a rainy Saturday when Raymond called, and he and Jean had old home week. He told Jean about his 82-year-old mom, Sylvia, who made the national news in the springtime when she ran down a purse snatcher in the Walmart parking lot. Here’s that story:

http://www.ksl.com/?nid=711&sid=19759599

Jean says “That sounds like Sylvia”.  I don’t know if Raymond found his deer or not.