Advanced Lane Detection

In this Advanced Lane Detection project, we apply computer vision techniques to augment video output with a detected road lane, the road's radius of curvature and the car's offset from the lane centre. The video was supplied by Udacity and captured using the middle camera.

sample lane detection result

The goals / steps of this project are the following:

  • Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
  • Apply a distortion correction to raw images.
  • Use color transforms, gradients, etc., to create a thresholded binary image.
  • Apply a perspective transform to rectify binary image (“birds-eye view”).
  • Detect lane pixels and fit to find the lane boundary.
  • Determine the curvature of the lane and vehicle position with respect to center.
  • Warp the detected lane boundaries back onto the original image.
  • Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.

A Jupyter/IPython data science notebook was used and can be found on GitHub: Full Project Repo, Advanced Lane Finding Project Notebook (note the interactive ipywidgets are not functional on GitHub). The project is written in Python and utilises NumPy and OpenCV.

Camera Calibration

Every camera lens introduces some distortion. The standard approach to correct for it in (x,y,z) space is to apply distortion coefficients to undistort the image. To calculate these coefficients, a camera calibration process is required.

It involves reading a set of distorted chessboard images and converting them to greyscale before using cv2.findChessboardCorners() to identify the corners as imgpoints.
9x6 Chessboard Corners Detected

If corners are detected, they are collected as image points imgpoints along with a set of object points objpoints, with the assumption that the chessboard is fixed on the (x,y) plane at z=0 (the object points are hence the same for each calibration image).

In the function camera_calibrate I pass the collected objpoints, imgpoints and a test image for the camera image dimensions. It in turn uses cv2.calibrateCamera() to calculate the camera matrix and distortion coefficients before the test image is undistorted with cv2.undistort(), giving the following result.
Original and Undistorted image
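As a rough sketch of this calibration flow (the 9x6 pattern size comes from the image above; the file paths and variable names are illustrative, not the notebook's actual ones):

import glob
import cv2
import numpy as np

# Object points for a 9x6 chessboard, fixed on the z=0 plane
objp = np.zeros((9 * 6, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

objpoints, imgpoints = [], []
for fname in glob.glob('camera_cal/calibration*.jpg'):  # assumed path
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (9, 6), None)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

# Calibrate against the collected points, then undistort a test image
test_img = cv2.imread('test_images/test1.jpg')  # assumed path
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, test_img.shape[1::-1], None, None)
undistorted = cv2.undistort(test_img, mtx, dist, None, mtx)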

Pipeline (Test images)

After camera calibration, a set of functions was created to work on test images before later being used in the video pipeline.

Distortion corrected image

The undistort_image function takes an image and defaults the mtx and dist variables to those from the previous camera calibration before returning the undistorted image.
test image distorted and undistorted

Threshold binary images

A threshold binary image, as the name implies, contains a representation of the original image but in binary (0, 1) as opposed to the BGR (Blue, Green, Red) colour spectrum. The threshold part means that if, say, the Red colour channel (with a range of 0-255) falls within a threshold range of 170-255, the pixel is set to 1; otherwise it is set to 0.

A sample output follows.
Sample Threshold Image

Initial experimentation occurred in a separate notebook before being refactored back into the project notebook as the combined_threshold function. It has a number of default thresholds for Sobel gradient x & y, Sobel magnitude, Sobel direction, Saturation (from HLS), Red (from RGB) and Y (luminance from YUV), plus a threshold type parameter (daytime-normal, daytime-bright, daytime-shadow, daytime-filter-pavement).
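A much-simplified sketch of this kind of combined threshold, using just a Sobel x gradient on the L channel and a Saturation threshold (the threshold values below are illustrative defaults, not the notebook's tuned ones):

import cv2
import numpy as np

def threshold_binary(img, s_thresh=(170, 255), sx_thresh=(20, 100)):
    """Combine a Sobel x gradient threshold with an HLS S-channel threshold."""
    hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
    l_channel, s_channel = hls[:, :, 1], hls[:, :, 2]

    # Gradient in x on the lightness channel, scaled to 0-255
    abs_sobelx = np.absolute(cv2.Sobel(l_channel, cv2.CV_64F, 1, 0))
    scaled = np.uint8(255 * abs_sobelx / np.max(abs_sobelx))

    binary = np.zeros_like(s_channel)
    binary[((scaled >= sx_thresh[0]) & (scaled <= sx_thresh[1])) |
           ((s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1]))] = 1
    return binary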

Whilst the daytime-normal threshold worked well for the majority of images, there were situations where it didn't, e.g. pavement colour changes in bright light and shadow.

Daytime Normal with noise bright light & pavement change
Daytime Normal with shadow

Other samples: Daytime Bright, Daytime Shadow and Daytime Filter Pavement.

Perspective transform – birds eye view

To be able to detect the road lines, the undistorted image is warped. The function calc_warp_points takes an image's height & width and then calculates the src and dst arrays of points. perspective_transforms takes them and returns two matrices, M and Minv, used by the perspective_warp and perpective_unwarp functions respectively. The following image shows an undistorted image with the src points drawn, alongside the corresponding warped image (the goal here was straight lines).
Distorted with bird's eye view
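A minimal sketch of the transform pair (the src/dst points below are illustrative values for a 1280x720 image; the project derives its own in calc_warp_points):

import cv2
import numpy as np

def perspective_transforms(src, dst):
    """Return the warp matrix and its inverse."""
    return cv2.getPerspectiveTransform(src, dst), cv2.getPerspectiveTransform(dst, src)

def perspective_warp(img, M):
    h, w = img.shape[:2]
    return cv2.warpPerspective(img, M, (w, h), flags=cv2.INTER_LINEAR)

# Example points (assumed for illustration only)
src = np.float32([[595, 450], [685, 450], [1100, 720], [200, 720]])
dst = np.float32([[300, 0], [980, 0], [980, 720], [300, 720]])
M, Minv = perspective_transforms(src, dst)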

Lane-line pixel identification and polynomial fit

Once we have a bird's eye view with a combined threshold, we are in a position to identify the lane-line pixels and fit a polynomial through them (which can also be used to search for points in subsequent binary images).

topdown warped binary image

A histogram is created via lane_histogram from the bottom third of the top-down warped binary image. Within lane_peaks, scipy.signal is used to identify left and right peaks. If only one peak is found, the max bin either side of centre is returned instead.
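A sketch of that histogram and peak search, assuming scipy.signal.find_peaks_cwt as the peak finder (the exact call and fallback logic in the notebook may differ):

import numpy as np
from scipy import signal

def lane_histogram(binary_warped):
    """Column sums over the bottom third of the warped binary image."""
    third = binary_warped.shape[0] // 3
    return np.sum(binary_warped[-third:, :], axis=0)

def lane_peaks(histogram):
    """Pick a left and right peak, falling back to the max bin either side of centre."""
    midpoint = len(histogram) // 2
    peaks = signal.find_peaks_cwt(histogram, np.arange(1, 100))
    left = [p for p in peaks if p < midpoint]
    right = [p for p in peaks if p >= midpoint]
    left_x = max(left, key=lambda p: histogram[p]) if left else int(np.argmax(histogram[:midpoint]))
    right_x = max(right, key=lambda p: histogram[p]) if right else midpoint + int(np.argmax(histogram[midpoint:]))
    return left_x, right_x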

calc_lane_windows uses these peaks along with a binary image to initialise a left and right instance of a WindowBox class. find_lane_window then controls the WindowBox search up the image to return an array of WindowBoxes that should contain the lane line. calc_fit_from_boxes returns a polynomial, or None if nothing was found.
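The WindowBox class itself isn't reproduced here, but a simplified stand-in for the same idea (fixed-height windows crawling up the image, re-centring on the pixels they capture, then a second-order fit) might look like this; names and defaults are illustrative:

import numpy as np

def sliding_window_fit(binary_warped, x_start, n_windows=9, margin=100, min_pix=50):
    """Collect lane pixels window by window from the bottom up, then fit x = f(y)."""
    height = binary_warped.shape[0]
    window_height = height // n_windows
    nonzero_y, nonzero_x = binary_warped.nonzero()
    x_current = x_start
    lane_idx = []

    for window in range(n_windows):
        y_low = height - (window + 1) * window_height
        y_high = height - window * window_height
        x_low, x_high = x_current - margin, x_current + margin

        good = ((nonzero_y >= y_low) & (nonzero_y < y_high) &
                (nonzero_x >= x_low) & (nonzero_x < x_high)).nonzero()[0]
        lane_idx.append(good)

        # Re-centre the next window on the mean x of the pixels just found
        if len(good) > min_pix:
            x_current = int(nonzero_x[good].mean())

    lane_idx = np.concatenate(lane_idx)
    if len(lane_idx) == 0:
        return None  # nothing found, mirroring calc_fit_from_boxes returning None
    # Lines are near-vertical in the warped view, so fit x as a function of y
    return np.polyfit(nonzero_y[lane_idx], nonzero_x[lane_idx], 2)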

The poly_fitx function takes a fity, where fity = np.linspace(0, height-1, height), and a polynomial, and calculates an array of x values.
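In other words, something along these lines (a sketch; the notebook's implementation may differ slightly):

import numpy as np

def poly_fitx(fity, poly):
    """Evaluate the fitted polynomial (highest power first) at each y value."""
    return np.polyval(poly, fity)

height = 720  # assumed warped image height
fity = np.linspace(0, height - 1, height)
# fitx = poly_fitx(fity, left_fit)   # x position of the lane line at each image row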

The search result is plotted on the bottom left of the image below, with each box in green. To test line searching by polynomial, I then use the left & right WindowBox search polynomials as input to calc_lr_fit_from_polys. The bottom-right graphic has the new polynomial line drawn, with the blue search window (relating to the polynomial obtained from the WindowBoxes) overlapping the green window for the new fit.

Warped box seek and new polynomial fit

Radius of curvature calculation and vehicle from centre offset

In road design, curvature is important and it's normally measured by the radius of the curve. For a straight section of road, that value can be very large.

In this project our images are in pixel space and need to be converted into metres. The images are of US roads, and I measured from this image the distance between lines (413 px) and the length of the dashes (275 px). Lane width in the US is ~3.7 metres and dashed lines are 3 metres long. Thus xm_per_pix = 3.7/413 and ym_per_pix = 3./275 were used in calc_curvature. The function converts the polynomial from pixel space into a polynomial in metres before calculating the radius.
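A sketch of that conversion and the resulting radius calculation (evaluated at the bottom of a 720-pixel-high image; the y_eval point and the function's shape are assumptions):

import numpy as np

XM_PER_PIX = 3.7 / 413   # metres per pixel in x (US lane width / measured pixels)
YM_PER_PIX = 3.0 / 275   # metres per pixel in y (dash length / measured pixels)

def calc_curvature(fity, fitx, y_eval=719):
    """Refit the line in metre space and return the radius of curvature at y_eval."""
    fit_m = np.polyfit(fity * YM_PER_PIX, fitx * XM_PER_PIX, 2)
    A, B, _ = fit_m
    y = y_eval * YM_PER_PIX
    return ((1 + (2 * A * y + B) ** 2) ** 1.5) / abs(2 * A)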

To calculate the offset from centre, I first determined where on the x plane both the left lx and right rx lines crossed the image nearest the driver. I then calculated the xcenter of the image as width/2. The offset was calculated as (rx - xcenter) - (xcenter - lx) before being multiplied by xm_per_pix.
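As a small sketch (sign convention assumed: a positive result means the lane centre sits to the right of the image centre):

def centre_offset(lx, rx, width=1280, xm_per_pix=3.7 / 413):
    """Offset of the lane centre from the image centre, converted to metres."""
    xcenter = width / 2
    return ((rx - xcenter) - (xcenter - lx)) * xm_per_pix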

Final pipeline

I decided to take a more Python class-based approach as I progressed through this project. Inside the classes, I called the functions mentioned previously. The classes created were:

  • Lane contains image processing, final calculations for view drawing and reference to left and right RoadLines. It also handled searching for initial lines, recalculations and reprocessing a line that was not sane;
  • RoadLine contains a history of Lines and associated curvature and plotting calculations using weighted means; and
  • Line contains details about the line and helper functions.

Processing is triggered by setting the Lane.image variable. Convenient property methods Lane.warped, Lane.warped_decorated, Lane.result and Lane.result_decorated return processed images. This made it very easy to debug output using interactive ipywidgets (which don't work on GitHub).
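The pattern is essentially a property setter that kicks off the pipeline; a bare-bones sketch of the idea (the real class also holds the calibration, RoadLines, sanity checks and so on):

class Lane:
    """Minimal illustration of processing triggered by setting the image property."""

    def __init__(self):
        self._image = None
        self._result = None

    @property
    def image(self):
        return self._image

    @image.setter
    def image(self, img):
        # Setting the image is what triggers the whole pipeline run
        self._image = img
        self._result = self._process(img)

    @property
    def result_decorated(self):
        return self._result

    def _process(self, img):
        # placeholder for undistort -> threshold -> warp -> fit -> unwarp -> decorate
        return img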

Sample result images

lane.result_decorated
Lane.warped_decorated

Pipeline (Video)

Using moviepy to process the project video was simple. I also decorated the result with a frame count. The Project Video Lane mp4 on GitHub contains the result (YouTube copy).
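A sketch of that moviepy wiring (the file names are illustrative; the real lane pipeline call replaces the commented line):

import cv2
from moviepy.editor import VideoFileClip

frame_count = 0

def process_frame(rgb_frame):
    """Run the lane pipeline on a frame and stamp it with a frame counter."""
    global frame_count
    frame_count += 1
    out = rgb_frame.copy()
    # out = lane_pipeline(out)   # e.g. set Lane.image and take Lane.result_decorated
    cv2.putText(out, "frame: %d" % frame_count, (30, 60),
                cv2.FONT_HERSHEY_SIMPLEX, 1.5, (255, 255, 255), 2)
    return out

clip = VideoFileClip("project_video.mp4").fl_image(process_frame)
clip.write_videofile("project_video_lane.mp4", audio=False)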

Discussion

Problems/Issues faced

To some degree, I got distracted trying to solve the issues I found in my algorithm on the challenge videos. This highlighted that I need to improve my understanding of colour spaces, Sobel operators and threshold combinations.

I included a basic algorithm to remove pavement colours from the images using centre, left and right focal points. I noticed that the dust colour on the vehicle also seemed to appear in the roadside foliage. This, however, wasn't sufficient to remove all pavement colour and didn't work when there was a road-type transition. It was also very CPU intensive.

In the end, I used a combination of different methods: a basic noise filter on the warped binary images determined whether a given threshold was good enough to look for a line. If it wasn't, the next method was tried, with the final fallback being a vertical rectangular window crawl down the image in which the best filter was determined for each box. Again this was CPU intensive, but it worked.

Another issue faced was using the previous radius of curvature to determine whether a new line was sane or not. The values were too jittery, and very high when driving on a straight section, so I decided not to pursue this.

Opportunities for improvement in the algorithm/pipeline

There is room here for some refactoring into a more object-oriented approach; it was not evident at the start of the project how it should be structured. I experimented a little with using Pool from multiprocessing to parallelise the left and right lane searches. It didn't make it into my final classes for normal line searching using a polynomial, as I did not ascertain whether the multiprocessing overhead outweighed the value of the parallelism. There is certainly potential here to use a more functional approach to give the best runtime options for parallelisation.

Other areas include automatically detecting the src points for the warp, handling bounce in the road, and understanding the height of the camera above the road surface and its impact.

I also thought that, as I've kept history, I could extend the warp to include a bird's eye representation of the car on the road and the area directly behind it. I did apply mean averaging on results to smooth the drawn lines, but this was not fed back into the new line calculations for subsequent image frames.

The algorithm could also be made to predict the line where there are gaps. This would be easier with continuous lines than dashed ones.

Hypothetical pipeline failure cases

Pavement repairs, or combinations of surfaces, that create vertical lines near existing road lines.

It would also fail if there was a road crossing or a need to cross lanes or to exit the freeway.

Rain and snow would also have an impact and I’m not sure about night time.

Tailgating a car, or a car on a tighter curve, would potentially obstruct the camera's view and hence line detection.

Clone Driving Behaviour

Clone driving behaviour using Deep Learning

With this behaviour cloning project, we give steering & throttle instructions to a vehicle in a simulator, based on the centre camera image and telemetry data it sends. The steering angle is predicted by a neural network model trained against data saved from track runs I performed.
simulator screenshot

The training of the neural net model is achieved with driving behaviour data captured, in training mode, within the simulator itself. Additional preprocessing occurs as part of the batch generation of data for the neural net training.

Model Architecture

I decided to follow, as closely as possible, Nvidia's End to End Learning for Self-Driving Cars model. I diverged by passing cropped camera images as RGB rather than YUV, by adjusting brightness, and by using the steering angle as is. I experimented with using 1/r (inverse turning radius) as input but found the values were too small (I also did not know the steering ratio and wheelbase of the vehicle in the simulator).

Additional experimentation occurred with the comma.ai steering angle prediction model, but its number of parameters was higher than the Nvidia model's and it worked off full-sized camera images. As training time was significantly longer, and initial iterations created an interesting off-road driving experience in the simulator, I discontinued these endeavours.

The model represented here is my implementation of the Nvidia model mentioned previously. It is coded in Python using Keras (with TensorFlow) in model.py and returned from the build_nvidia_model method. The complete project is on GitHub here: Udacity Behaviour Cloning Project.

Input

The input is 66x200xC with C = 3 RGB color channels.

Architecture

Layer 0: Normalisation to the range -1 to 1 (x / 127.5 - 1)

Layer 1: Convolution with strides=(2,2), valid padding, kernel 5×5 and output shape 31x98x24, with elu activation and dropout

Layer 2: Convolution with strides=(2,2), valid padding, kernel 5×5 and output shape 14x47x36, with elu activation and dropout

Layer 3: Convolution with strides=(2,2), valid padding, kernel 5×5 and output shape 5x22x48, with elu activation and dropout

Layer 4: Convolution with strides=(1,1), valid padding, kernel 3×3 and output shape 3x20x64, with elu activation and dropout

Layer 5: Convolution with strides=(1,1), valid padding, kernel 3×3 and output shape 1x18x64, with elu activation and dropout

Flatten, giving 1152 outputs

Layer 6: Fully Connected with 100 outputs and dropout

Layer 7: Fully Connected with 50 outputs and dropout

Layer 8: Fully Connected with 10 outputs and dropout

Dropout was set aggressively on each layer at 0.25 to avoid overfitting.

Output

Final layer: Fully Connected with 1 output value for the steering angle.
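Putting the layers above together, a sketch of build_nvidia_model in today's tf.keras syntax (the original used Keras 1.x; the ELU activation on the fully connected layers and the exact dropout wiring are assumptions):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Lambda, Conv2D, Dropout, Flatten, Dense

def build_nvidia_model(dropout=0.25):
    model = Sequential([
        Lambda(lambda x: x / 127.5 - 1.0, input_shape=(66, 200, 3)),             # Layer 0: normalise to [-1, 1]
        Conv2D(24, (5, 5), strides=(2, 2), padding='valid', activation='elu'),   # -> 31x98x24
        Dropout(dropout),
        Conv2D(36, (5, 5), strides=(2, 2), padding='valid', activation='elu'),   # -> 14x47x36
        Dropout(dropout),
        Conv2D(48, (5, 5), strides=(2, 2), padding='valid', activation='elu'),   # -> 5x22x48
        Dropout(dropout),
        Conv2D(64, (3, 3), strides=(1, 1), padding='valid', activation='elu'),   # -> 3x20x64
        Dropout(dropout),
        Conv2D(64, (3, 3), strides=(1, 1), padding='valid', activation='elu'),   # -> 1x18x64
        Dropout(dropout),
        Flatten(),                                                                # 1152 outputs
        Dense(100, activation='elu'), Dropout(dropout),
        Dense(50, activation='elu'), Dropout(dropout),
        Dense(10, activation='elu'), Dropout(dropout),
        Dense(1),                                                                 # steering angle
    ])
    model.compile(optimizer='adam', loss='mse')
    return model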

Visualisation

Keras output plot (not the nicest visuals)

Data preprocessing and Augmentation

The simulator captures data into a CSV log file which references the left, centre and right captured images within a subdirectory. Telemetry data for steering, throttle, brake and speed is also contained in the log. Only steering was used in this project.

My initial investigation and analysis was performed in a Jupyter Notebook here.

Before being fed into the model, the images are cropped to 66×200, starting at height 60 with the width centred (a sample video of a run cropped).

Cropped left, centre and right camera image

As seen in the following histogram, a significant proportion of the data is for driving straight, and it's lopsided towards left turns (a negative steering angle indicates left) when using the data generated from my conservative driving laps.
Steering Angle Histogram

The log file was preprocessed to remove runs of more than 5 contiguous rows with a 0.0 steering angle. This was the only preprocessing done outside of the batch generators used in training (random rows are augmented/jittered for each batch at model training time).

A left, centre or right camera image was selected randomly for each row, with a 0.25 angle adjustment applied to the steering (+ for left and − for right).

Jittering was applied per Vivek Yadav's post to augment the data. Images were randomly translated by up to 100 pixels in x and up to 10 pixels in y, with a proportional adjustment (0.4 across the x range) applied against the steering angle. Brightness was also adjusted via an HSV (V channel) transform (0.25 plus a random number in the range 0 to 1).
jittered image

During batch generation, to compensate for the left-turn bias, 50% of images were flipped (with the steering angle reversed) if the absolute steering angle was > 0.1.

Finally, images are cropped as per above before being batched.
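A sketch of the per-row augmentation described above (the translation-to-steering scale and the brightness formula are my reading of the numbers in the post, so treat them as assumptions):

import cv2
import numpy as np

def augment_row(image, steering, camera='center'):
    """Jitter one simulator image/steering pair roughly as described above."""
    # Side-camera steering correction
    if camera == 'left':
        steering += 0.25
    elif camera == 'right':
        steering -= 0.25

    # Random translation: up to 100 px in x and 10 px in y, adjusting the steering
    # proportionally to the x shift (0.4 across the full x range - assumed reading)
    tx = np.random.uniform(-100, 100)
    ty = np.random.uniform(-10, 10)
    steering += (tx / 100.0) * 0.4
    h, w = image.shape[:2]
    image = cv2.warpAffine(image, np.float32([[1, 0, tx], [0, 1, ty]]), (w, h))

    # Random brightness via the HSV V channel (scaled by 0.25 + uniform[0, 1))
    hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV).astype(np.float64)
    hsv[:, :, 2] = np.clip(hsv[:, :, 2] * (0.25 + np.random.uniform()), 0, 255)
    image = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)

    # Flip ~50% of frames with a meaningful steering angle to offset the left-turn bias
    if abs(steering) > 0.1 and np.random.rand() > 0.5:
        image = cv2.flip(image, 1)
        steering = -steering

    # Crop to 66x200, starting at height 60, width centred
    x0 = (w - 200) // 2
    image = image[60:60 + 66, x0:x0 + 200]
    return image, steering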

Model Training

Data was captured from the simulator. I drove conservatively around the track three times, paying particular attention to the sharp right turn. I found connecting a PS3 controller allowed finer control than using the keyboard. At least once I waited until the last moment before taking the turn; this seems to have stopped the car ending up in the lake. It also helped to overcome a symptom of the bias in the training data towards left turns. To further offset this risk, I validated the training using a test set I'd captured from the second track, which is a lot more winding.

Training sample captured of left, centre and right cameras cropped

Center camera has the steering angle and 1/r values displayed.

Validation sample captured of left, centre and right cameras cropped

Center camera has the steering angle and 1/r values displayed.

The Adam optimizer was used with a mean squared error loss. A number of hyper-parameters were passed on the command line. The command I used looked like the following, for a batch size of 500, 10 epochs (stopped early if the loss wasn't improving), dropout at 0.25, a training size of 50,000 randomly augmented features with adjusted labels, and 2,000 random features & labels used for validation:

python model.py --batch_size=500 --training_log_path=./data --validation_log_path=./datat2 --epochs 10 \
--training_size 50000 --validation_size 2000 --dropout .25

Model Testing

To meet requirements, and hence pass the assignment, the vehicle has to drive around the first track staying on the road and not going up on the curb.

The trained model (which is saved) is used again in testing. The simulator feeds you the centre camera image, along with steering and throttle telemetry. In response, you have to return the new steering angle and throttle values. I hard-coded the throttle to 0.35. The image was cropped, the same as for training, then fed into the model for prediction, giving the steering angle.


# predict the steering angle from the cropped camera image; throttle is fixed
steering_angle = float(model.predict(transformed_image_array, batch_size=1))
throttle = 0.35

Successful run track 1

Successful run track 2

Note: the trained model I used for the track 1 run is different to the one used to run the simulator on track 2. I found that the data I originally used to train a model to run both tracks would occasionally make the car meander quite wildly on track 1. Thus I used training data that made it more conservative, to meet the requirements of the project.

My first lane detection algo

I'm all for practical learning by building things. There's nothing like getting stuck into a project and seeing results. Whilst a little progress is a good motivator, it also shows you how much you don't know.

I was pleased with the results of my first project in the Udacity Self Driving Car Engineer Nanodegree. Yet what was more pleasing was being shown how much experimentation is really required. That is, there is so much to learn.

This first module was about understanding some of the principles of computer vision that apply. We first started with Canny edge detection and then the Hough transform to detect lines within a region of interest.

The first project was to apply this learning, first to a set of static images and then to a couple of videos captured whilst driving on a highway.
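A minimal sketch of that first pipeline on a single image – Canny edges, a region-of-interest mask, then a probabilistic Hough transform (the file name and parameter values are illustrative):

import cv2
import numpy as np

img = cv2.imread('test_images/solidWhiteRight.jpg')  # assumed test image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

# Keep only a trapezoidal region of interest ahead of the car
h, w = edges.shape
roi = np.array([[(0, h), (int(w * 0.45), int(h * 0.6)),
                 (int(w * 0.55), int(h * 0.6)), (w, h)]], dtype=np.int32)
mask = np.zeros_like(edges)
cv2.fillPoly(mask, roi, 255)
masked = cv2.bitwise_and(edges, mask)

# Probabilistic Hough transform, then draw the detected segments
lines = cv2.HoughLinesP(masked, rho=2, theta=np.pi / 180, threshold=20,
                        minLineLength=40, maxLineGap=20)
if lines is not None:
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        cv2.line(img, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 3)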

How cool is the final result.

I submitted my Jupyter Notebook for review. To pass, you need to ensure that you meet the specification. I passed; here is the review feedback.

Some of my reflection thoughts on what can be improved in a future iteration of the project:

  • look for the road horizon starting from the bottom centre of the image and working up – asphalt has a fairly unique colour
  • break the ROI into left and right lanes earlier – it seems that, at least when driving on a highway without changing lanes, we can assume with confidence where they should start at the base of the image
  • segment each ROI into chained vertical blocks of a smaller width
  • when drawing, connect the intersections of the cv2.fitLine lines
  • increase the number of segments if the lanes are curving left or right
  • label the lines with colour and type – continuous, dashed, etc.
  • feed the previous result into the evaluation of the next image
  • determine when an image has no lanes that could be considered reasonable
  • lane changing and entering a lane from the curb needs more thought
  • using smaller, more specific left and right lane ROIs should allow for following a vehicle
  • not sure how rain affects this – might have to do a test and capture video in a tropical downpour this storm season
  • this approach wouldn't work in snow – it would require a different approach

It's still early in the nanodegree but I'm hooked already. Happy coding and driving.

Switching off from Aussie innovation for the time being …

The slow dawn of reality has crept into my thinking that what I'm presently witnessing is the rise of politically correct innovation within Australia. That is, there is a rush on to be positioned to secure funding and to "innovation wash" existing service offerings, ready for when government programs come into effect.

My high hopes for an ideas boom have been dashed somewhat of late. Not so much because of the intent, but because the intent does not match reality. There is significant education (dare I say re-education) required.

Let me show you what I mean.

If we look at the innovation website Business.gov.au (their definition here), it basically suggests that innovation is about change. It follows:

What is innovation?

Innovation generally refers to changing or creating more effective processes, products and ideas, and can increase the likelihood of a business succeeding. Businesses that innovate create more efficient work processes and have better productivity and performance.

Now if we look at the Wikipedia innovation article, it suggests that the term "innovation" can be defined as something original and more effective and, as a consequence, new, that "breaks into" the market or society.

Innovation is a new idea, or more-effective device or process.[1] Innovation can be viewed as the application of better solutions that meet new requirements, unarticulated needs, or existing market needs.[2] This is accomplished through more-effective products, processes, services, technologies, or business models that are readily available to markets, governments and society.

So from my perspective the first one is inwards looking, using the term innovation (as in "that's an innovative idea") to mean business change or continuous improvement.

I've often argued that changing a business process to make it more effective is not innovation. However, if that idea is brought to market as a new product or service offering, then it is innovation, i.e. there has to be diffusion into a market or society.

Now if we look at the slick new marketing or education material (your viewpoint may differ on this) being produced by the National Innovation & Science Agenda (Australian Government), it refers to how Australians have been good at ideas, but now need to get better at commercialising – turning those ideas into new products or services.

As you can see, there is a long road ahead, with a lot of jargon at present, such as "Ideas Boom". It will take some time for people to agree on what things mean (even though great definitions are available now) and to reach consensus. Then decisions will need to be made about how much capital is to be made available and under what investment thesis it will be allocated.

There are a lot of people shouting about a number of things surrounding these topics, and if you're not shouting the politically correct message too then, no matter how novel and disruptive your idea or invention is, it may not benefit from the "Ideas Boom". If this is affecting you, jump on a plane and go to Silicon Valley or wherever else may be appropriate.

I keep hearing about the lack of opportunity here in Australia, in many fields, on podcasts I listen to occasionally. On those podcasts, when people ask eminent panel members for their thoughts on the subject, invariably the answer is that they do hope you stay and help drive the next generation. It always surprises me, assuming these persons are in tenured positions, how divorced their responses seem to be from the reality of needing money (or, some may say, capital) and support to do so. It's this latter bit that will take so long to grow here in Australia. It may also require a generational change. The notion, held by some, of not taking risks is the antithesis of what is required in an "ideas boom" era.

So I'm thinking of slowly fading away from observing and commenting on all of this, until I need something concrete from it. Presently it all seems to be a nice discussion, but discussion is, after all, just discussion and not tangible outcomes.

Deflect, defer elsewhere and finally block – councillors answer to innovation

No one likes to be blocked on Twitter. My first reaction to being blocked by William Owen-Jones, a local councillor on the Gold Coast, was that he's a sore loser and I had won. He'd just thrown in the towel. I was sort of rejoicing. However, there was more to this; I must have really hit a few raw nerves. This happens when cultures, and what one values as important, are worlds apart.
Tweets_with_replies_by_William_Owen-Jones___WOJgoldcoast____Twitter

As can be seen in the photo attached to this article, William Owen-Jones threw this blocking straight in my face. I never swore or called him names, and I tried very hard not to be rude. I asked questions and responded to tweets directed at me. I'm not really sure that this is acceptable behaviour for a public profile – which can be seen here.

One may also argue that this blog post is doing the same thing – throwing it back in his face. However, I've been thinking for a while about whether to write it or not. Clearly, I've decided to do so, because this little online incident shows just how much work is required to change attitudes in the city I presently live in, the Gold Coast, Australia. There is no real tech/entrepreneurial culture here outside of a few small pockets, and those presently classified as leaders have had little to no real exposure to an innovative tech culture based around startups, nor to large groups of techies & programmer types.

Twitter is a place where you can communicate with people you normally wouldn't. In the case of elected officials, Twitter acts as a conduit through which they can engage more readily to find out what constituents' needs and wants are. But this works both ways, in that constituents can find out the machinations behind the public office. When a public official blocks someone, it just says: don't talk to me unless you agree with what I'm going to tell you. For intelligent and inquiring people, that's just so wrong.

I, like so many other techies in Australia, have been amazed over the last decade or so at the conservatism of our leaders, at all levels of the public and private sectors, regarding technology and innovation. It was the latter that I was really pushing the councillor on. I wanted to know what the local council was doing and how they intended to respond to the recent statements by Australia's new Prime Minister, Malcolm Turnbull, that innovation is at the forefront of Australia's economic agenda – an opinion piece can be found here. An excerpt from Malcolm's speech:

"We have to work more agilely, more innovatively, we have to be more nimble in the way we seize the enormous opportunities that are presented to us. We're not seeking to proof ourselves against the future. We are seeking to embrace it," Malcolm Turnbull said during his ministry announcement speech.

What I found was that I was deflected to things such as a press release about running for office (yeah right).
Tweets_with_replies_by_William_Owen-Jones___WOJgoldcoast____Twitter

Or I was deferred to other levels of government.

But I wanted to know "what are the local Council policies?" and "what are they doing?". Unlike other councils around Australia, e.g. the Sunshine Coast, there appeared to be no policies. There also seemed to be no understanding of what potential role local council, in particular the GCCC, would play (or would like to play).

It also became clear that what innovation meant was not well understood by Will – he deferred to a definition from BRW (Business Review Weekly, an Australian publication) of "change that adds value". That's just continuous improvement. I referred him to Clayton Christensen and disruptive innovation. However, that was lost on him… no response.

There was no distinction that I could perceive between old-school internal enterprise IT and innovation (R&D, entrepreneurship and commercialisation of novel IP). They seemed to be treated as the same thing, which really surprised me.

I tried to suggest it's time to stop following and start leading. Clearly, by pushing for answers and calling BS on him, I just highlighted his ignorance.

I feel quite strongly, after seeing so many colleagues and associates leave the Gold Coast for greener pastures, that the GCCC needs to address this question of innovation in a more professional, thought-out way. The existing approach is just ticking a few boxes. Old players are protecting turf (and budgets and reputations).

It's time for some renewal. They can't just keep blocking it out!

Renewed Australian tech innovation – will it keep our best and brightest home

There are apparently 20,000+ Australian tech entrepreneurs in the San Francisco/Silicon Valley area. They left Australia as there is limited ability to pursue what they are passionate about here. The government and business environment, hampered by conservatism and a risk-averse culture, stifles their creativity and throttles funding for early-stage commercialisation.

The Australian tech entrepreneur community is rejoicing this week. Finally we have a Prime Minister, Malcolm Turnbull, who is a friend. He is placing innovation and technology at the forefront of Australia's political agenda. It's a brave move, with significant inertia at all levels of government and business needing to be addressed. Can it be overcome? Will anything change?

These are hard questions to answer succinctly in this post. My initial rejoicing, like that of others, has quickly been brought back to the usual lethargy about pursuing such activity in Australia. Australian business executives, politicians and government people appear, IMHO, to be living in a dream world in which they have nothing to fear from technology disruption. Like many other entrepreneurs, I believe this to be a misguided view.

In the past, Australia as a country has had economic success from primary production and resources. The last decade saw a mining boom, fuelled by iron ore supplied to China, producing significant wealth. It enabled the economy to ride out the GFC (Mk I & Mk II). However, unlike in other parts of the world, it didn't clean out inefficient businesses. It's too much detail to go into here, but technology in the US reduced the need to rehire to the same employment levels and beyond once the GFC was over. Companies just kept the same number of employees as they recovered, i.e. they became more productive.

Australia has one of the worst productivity ratios of any OECD country. The economy is now fuelled by banking and property (the Sydney and Melbourne price bubbles have not burst, and the ratio of house prices to wages means many simply will not own a home in their lifetimes). But this thinking that bricks & mortar property is a safe bet permeates all levels of executives, be it in the private or public sector.

There are generations in the Australian workforce that have never brought a novel invention to market, which is the crux of innovation. So if you try to have a meaningful discussion, they block everything out. They just don't have the knowledge or vocabulary to talk about it. Even worse, they think it's SEP (Someone Else's Problem), so they'll just sit back and wait to find out. In my eyes, it's just so wrong.

There are major issues ahead for the federal government, and I keep hearing that communication will be a key part of any policy. Yet I fear that, as an economy, we do not have the time or budget to re-educate and bring these people along. Most just like things the way they are now. If we leave it that way, our lifestyles will continue to suffer and the real value of our wages will continue to decline.

What's to be done with them? As a nation, do we just increase welfare and accept that unemployment will continue to rise?

It does sound harsh, but the smart ones have already left our shores (many more want to follow but can't for various reasons). They won't want to come back to work with people who just don't get what they are talking about. They'll want people who can openly debate and discuss the matter at hand in an intelligent way.

If we invest more in innovation, science and entrepreneurship in Australia, will we attract any of those 20,000 back? Maybe a few, but that'd be natural. Having visited the Bay Area a few times, I feel that the government has a lot of work to do to even start to come close. Presently it just appears to be euphoria – we'll need to see actual dollars being invested and attitudes changed. Hopefully, a new batch of political leaders will also be installed to help pave the way. So it'll take a while.

A tempest is brewing with Apache Storm – whats going on there?

I've lost a bit of faith in the Apache Foundation of late. I understand that a project in incubation is still getting its house in order, but how long should that take, how much confidence should one put in interim releases, and when should you just throw your hands up in the air and say enough is enough, time to find another tool?

My Apache Storm cluster blew up for no apparent reason. I spent days, maybe a week or more, debugging esoteric problems that simple test cases for common use cases should have caught, both for an upgrade and for normal development and testing runs.

After visiting San Francisco in early August, I fired up my Apache Hadoop, ZooKeeper, HBase, Kafka and Storm cluster to crunch some data that I'd collected. It'd been maybe three weeks since the last time. The cluster started but my topology would not run; there was some issue with Kafka offsets. A few days later, after checking all my Maven POMs and dependencies, recompiling, chasing through multiple log files and re-deploying countless times, I came to the realisation that there was nothing actually wrong with my code. Many Google searches later, I worked out from one vague little reference that the Kafka spout's ZooKeeper entry was causing a continuous reset of the Kafka spout to an offset that could never be found in the Kafka queue. It was not going to resolve itself. A little while later, I removed the offending ZooKeeper entries and the topology started moving – only to then hit other issues with timeouts between the workers and the Nimbus supervisor.

This was taking me days to resolve. I kept seeing others having issues with the Apache Storm (Incubating) 0.9.2 release. Some were similar to mine; others were the normal noob questions posed in user groups. I kept thinking to myself, what is going on with the Apache Storm (Incubating) group?

Why aren't they resolving these common issues being experienced by many, or at least publishing a fix that doesn't involve patching the source code and recompiling to create a new distribution? The average developer and smaller shops & startups just don't want these hassles, nor do they have the time or skills to be focusing on debugging a release. As I was alluding to earlier, I think people put faith in these releases being of a certain quality.

I did see a note come through a while back stating that the Apache Storm (Incubating) committee were looking to introduce release candidates before a final release. This is something I'd like to see, but I'd also like to see a de-emphasis on new features and on bringing in code branches that have been worked on in isolation, e.g. Yahoo's YARN version of Hadoop, without the appropriate support. That support should be aimed at making the project a success as an Apache project. That is, just dropping the code base and saying "here it is" isn't enough.

Clearly there is a lot of interest in Apache Storm within the broader big data community. There is a lot of goodwill, with people investing significant energy in learning and trying to leverage the software. However, from my perspective, I can't see the Apache Foundation and the Apache Storm (Incubating) committee rewarding that faith. The current release is just too unstable, and the user community is wasting too much time debugging esoteric problems with no readily available fix.

Releases are taking too long, and common issues that affect the product's usability aren't being addressed appropriately through the main Apache Storm (Incubating) website http://storm.incubator.apache.org/. It's presently eroding community goodwill and support for using the product. I get the distinct feeling that the whole process is under-resourced and that some of the key players are lost in new features and future releases.

Herein ends my rant. C'mon guys, get what you have working and stabilise it before introducing new features. I want to use Apache Storm, but presently I can't – and no, I'm not going to learn Clojure to help.