3D Perception Project

 

Gazebo PR2 3D Perception

The goal of this project was to create a 3D perception pipeline to identify and label the table objects using the PR2’s RGB-D (where D is depth) camera.

Exercise 1, 2 and 3 Pipeline Implemented

For this project the combined pipeline was implemented in perception_pipeline.py

Complete Exercise 1 steps. Pipeline for filtering and RANSAC plane fitting implemented.

The 3D perception pipeline begins with a noisy pc2.PointCloud2 ROS message. A sample animated GIF follows:

Noisy Camera Cloud

After conversion to a PCL cloud, a statistical outlier filter is applied to remove noise and give a filtered cloud.
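
A minimal sketch of this step, assuming the python-pcl API and the ros_to_pcl helper from the course pcl_helper module; the filter parameters here are illustrative, not necessarily the ones used in perception_pipeline.py:

# Convert the incoming ROS PointCloud2 message (pcl_msg in the callback) to a PCL cloud
cloud = ros_to_pcl(pcl_msg)

# Statistical outlier filter: points whose mean distance to their k nearest
# neighbours exceeds the global mean plus a scaled standard deviation are dropped
outlier_filter = cloud.make_statistical_outlier_filter()
outlier_filter.set_mean_k(20)                 # number of neighbours to analyse (illustrative)
outlier_filter.set_std_dev_mul_thresh(0.3)    # std-dev multiplier threshold (illustrative)
cloud_filtered = outlier_filter.filter()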

The cloud with the outliers removed (i.e. only inliers retained) follows:

Cloud Inlier Filtered

A voxel grid filter is applied with a voxel (also known as leaf) size of 0.01 to downsample the point cloud.
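
A sketch of the voxel grid downsampling, again assuming the python-pcl API used elsewhere in the pipeline:

# Voxel grid (leaf) downsampling to reduce point density
vox = cloud_filtered.make_voxel_grid_filter()
LEAF_SIZE = 0.01                                # leaf size per the write-up
vox.set_leaf_size(LEAF_SIZE, LEAF_SIZE, LEAF_SIZE)
cloud_filtered = vox.filter()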

Voxel Downsampled

Two passthrough filters are applied: one on the ‘x’ axis (axis_min = 0.4, axis_max = 3.0) to remove the box edges, and another on the ‘z’ axis (axis_min = 0.6, axis_max = 1.1) along the table plane.
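
A sketch of those two passthrough filters with the limits quoted above (python-pcl):

# Passthrough filter along z (keep points around the table height)
passthrough_z = cloud_filtered.make_passthrough_filter()
passthrough_z.set_filter_field_name('z')
passthrough_z.set_filter_limits(0.6, 1.1)
cloud_filtered = passthrough_z.filter()

# Passthrough filter along x (drop the near box edges)
passthrough_x = cloud_filtered.make_passthrough_filter()
passthrough_x.set_filter_field_name('x')
passthrough_x.set_filter_limits(0.4, 3.0)
cloud_filtered = passthrough_x.filter()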

Passthrough Filtered

Finally, RANSAC plane segmentation is applied to find the inliers (the table) and the outliers (the objects on it), per the following:

# Create the segmentation object
seg = cloud_filtered.make_segmenter()

# Set the model you wish to fit
seg.set_model_type(pcl.SACMODEL_PLANE)
seg.set_method_type(pcl.SAC_RANSAC)

# Max distance for a point to be considered fitting the model
max_distance = .01
seg.set_distance_threshold(max_distance)

# Call the segment function to obtain set of inlier indices and model coefficients
inliers, coefficients = seg.segment()

# Extract inliers and outliers
extracted_inliers = cloud_filtered.extract(inliers, negative=False)
extracted_outliers = cloud_filtered.extract(inliers, negative=True)
cloud_table = extracted_inliers
cloud_objects = extracted_outliers

Complete Exercise 2 steps: Pipeline including clustering for segmentation implemented.

Euclidean clustering on a white (XYZ-only) cloud is used to extract cluster indices for each clustered object. Individual ROS PCL messages are published (for the cluster cloud, table and objects) per the following code snippet:

    # Euclidean Clustering
    white_cloud = XYZRGB_to_XYZ(cloud_objects)
    tree = white_cloud.make_kdtree()

    # Create a cluster extraction object
    ec = white_cloud.make_EuclideanClusterExtraction()

    # Set tolerances for distance threshold
    # as well as minimum and maximum cluster size (in points)
    ec.set_ClusterTolerance(0.03)
    ec.set_MinClusterSize(30)
    ec.set_MaxClusterSize(1200)
    # Search the k-d tree for clusters
    ec.set_SearchMethod(tree)
    # Extract indices for each of the discovered clusters
    cluster_indices = ec.Extract()

    # Create Cluster-Mask Point Cloud to visualize each cluster separately
    #Assign a color corresponding to each segmented object in scene
    cluster_color = get_color_list(len(cluster_indices))

    color_cluster_point_list = []

    for j, indices in enumerate(cluster_indices):
        for i, indice in enumerate(indices):
            color_cluster_point_list.append([white_cloud[indice][0],
                                            white_cloud[indice][1],
                                            white_cloud[indice][2],
                                            rgb_to_float(cluster_color[j])])

    #Create new cloud containing all clusters, each with unique color
    cluster_cloud = pcl.PointCloud_PointXYZRGB()
    cluster_cloud.from_list(color_cluster_point_list)

    # Convert PCL data to ROS messages
    ros_cloud_objects = pcl_to_ros(cloud_objects)
    ros_cloud_table = pcl_to_ros(cloud_table)
    ros_cluster_cloud = pcl_to_ros(cluster_cloud)

    # Publish ROS messages
    pcl_objects_pub.publish(ros_cloud_objects)
    pcl_table_pub.publish(ros_cloud_table)
    pcl_cluster_pub.publish(ros_cluster_cloud)
/pcl_objects
/pcl_table
/pcl_cluster

Complete Exercise 3 Steps. Features extracted and SVM trained. Object recognition implemented.

Features were captured in the sensor_stick simulator for the model names [‘biscuits’, ‘soap’, ‘soap2’, ‘book’, ‘glue’, ‘sticky_notes’, ‘snacks’, ‘eraser’], with 40 samples captured for each.

The HSV colour space was used for the colour histograms, which were combined with surface-normal histograms, per:

# Extract histogram features
chists = compute_color_histograms(sample_cloud, using_hsv=True)
normals = get_normals(sample_cloud)
nhists = compute_normal_histograms(normals)
feature = np.concatenate((chists, nhists))
labeled_features.append([feature, model_name])

The colour histograms were produced with 32 bins in the range (0, 256) and the normal histograms with 32 bins in the range (-1, 1).
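
As an illustration, here is a minimal numpy sketch of how such binned, normalised histogram features can be built. The real compute_color_histograms / compute_normal_histograms helpers live in the sensor_stick features module, so the helper below is hypothetical:

import numpy as np

def hist_features(channels, bins, value_range):
    # Histogram each channel, concatenate, then normalise to unit sum
    hists = [np.histogram(c, bins=bins, range=value_range)[0] for c in channels]
    features = np.concatenate(hists).astype(np.float64)
    return features / np.sum(features)

# Colour (HSV) channels: 32 bins over (0, 256)
# Surface-normal (x, y, z) components: 32 bins over (-1, 1)
# feature = np.concatenate((hist_features(hsv_channels, 32, (0, 256)),
#                           hist_features(normal_channels, 32, (-1, 1))))
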
The full training set was used in train_svm.py, where I replaced the standard svm.SVC(kernel='linear') classifier with a random-forest style classifier (scikit-learn’s ExtraTreesClassifier).

from sklearn.ensemble import ExtraTreesClassifier

clf = ExtraTreesClassifier(n_estimators=10, max_depth=None,
                           min_samples_split=2, random_state=0)

It dramatically improved training scores, per the following normalised confusion matrix:

normalised confusion matrix

The trained model.sav was used as input to the perception pipeline. For each cluster found in the point cloud, histogram features were extracted as per the training step above, used for prediction, and the result added to a list of detected objects.

# Make the prediction, retrieve the label for the result
# and add it to detected_objects_labels list
prediction = clf.predict(scaler.transform(feature.reshape(1, -1)))
label = encoder.inverse_transform(prediction)[0]

Pick and Place Setup

test1.world

labeled test1 output

output_1.yaml

test2.world

labeled test2 output

output_2.yaml

test3.world

labeled test3 output

output_3.yaml

Reflection

It was interesting to learn about using point clouds and this approach. I found there were occasionally some false readings. In addition, few of the objects were actually picked and placed in the crates (the PR2 did not seem to grasp them properly). This may mean that further effort is needed to refine the centroid selection for each object.

Whilst I achieved ~90% average accuracy across all models on the classifier training, with more time spent I would have liked to achieve closer to 97%. This would also reduce those false readings. I’m also not sure I fully understand the noise introduced in this project by the PR2 RGB-D camera.

If I were to approach this project again, I’d be interested to see how a 4D tensor would work via deep learning using YOLO and/or CNNs. Further research is required.

Not all software development is the same

Having completed, as one of the first, the Udacity Self Driving Car Nano Degree in October 2017, I thought I’d share some of the things I learnt along the way.

Rather than recap the projects over the three terms, I’m going to focus in this post on the philosophy and attitude I developed to complete the nano degree program.

When I was first accepted, my initial reaction was: geez, have I bitten off more than I can chew? How am I going to cope with the mathematics and the theoretical side? It was a major concern.

Whilst at school I had always excelled at maths, and it was what led me into computing at a young age. I used to love writing graphics routines and optimising them. As my maths skills improved I learnt new ways of drawing circles and objects. I can’t remember if I got into vectors. Yet past school, having started working in corporate IT, I had little use for maths skills beyond what was needed for accounting. Yes, for a number of years, IT dumbed down my maths skills.

IT was more focused on entering data, storing it and reporting on it at monthly and yearly aggregate levels. Sure, I worked on near-realtime and mission-critical systems, but the need for very strong maths skills was limited. It was not a choice of my own; it was just that the technology that did leverage maths was perceived as too scientific or too risky for business to adopt. It just didn’t have priority or urgency. Or, if it was implemented, it was a black box that you supplied some input to and just consumed the output from.

Getting back to the Self Driving Car Nano Degree, it was these black boxes that were our projects. In the projects we needed to create the black boxes ourselves, to understand the theory and the mathematics.

Before starting the Nano Degree, I brushed up on matrices and vectors using Khan Academy.

Occasionally I got a little stuck on the mathematical proofs, but once I understood the code for the maths I was normally OK. Yes, my brain now works off code, not maths. We experienced some numerical instability, which was normally solved by interacting with others on the slack channels.

It was hard at times being among the first going through the material. However, with patience and continuous review, the material reinforced what was being taught. You had to be methodical and test each assertion you were making about your code. Sometimes that required taking the algorithm and implementing it in a repeatable test case inside a Jupyter Notebook. I found visualising the data improved understanding and helped to identify if anything was erroneous.

You could spend ages looking at the code and not see any obvious mistake. Without visualising the output, an easy mistake such as an incorrect sign in a rotation matrix, was not easy to observe.

The most valuable tool for when you got stuck was slack and your fellow students. These fellow students were online at all hours of the day, from across the globe.

After a few projects, I soon found an approach that worked for me. It boiled down to learning, writing some code, seeing what happened, fixing what was broken, validating my learning and repeating until I had a project that met requirements.

Getting stuck sometimes meant having a break, or having a late night. If I was really into tuning, it often meant the late night. Tweaking and trying different settings to get the neural network or algorithm to achieve what you needed was addictive. It was so much better than reading or watching a video. The impact of changing your code was visible, in most projects, in the simulator.

Your code didn’t produce a report, it produced observable action! It was like when I was a kid programming graphics for the first time.

So if you’re the type that likes to write those black boxes that other programmers use, you’ll excel at this Nano Degree. If you’re the type that consumes black boxes that others have written, you may need to change your outlook.

Advanced Lane Detection

In this Advanced Lane Detection project, we apply computer vision techniques to augment video output with a detected road lane, road radius curvature and road centre offset. The video was supplied by Udacity and captured using the middle camera.

sample lane detection result

The goals / steps of this project are the following:

  • Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
  • Apply a distortion correction to raw images.
  • Use color transforms, gradients, etc., to create a thresholded binary image.
  • Apply a perspective transform to rectify binary image (“birds-eye view”).
  • Detect lane pixels and fit to find the lane boundary.
  • Determine the curvature of the lane and vehicle position with respect to center.
  • Warp the detected lane boundaries back onto the original image.
  • Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.

A Jupyter/IPython data science notebook was used and can be found on github: Full Project Repo / Advanced Lane Finding Project Notebook (note the interactive ipywidgets are not functional on github). The project is written in python and utilises numpy and OpenCV.

Camera Calibration

Every camera has some distortion factor in its lens. The known approach to correct for that in (x, y, z) space is to apply coefficients to undistort the image. To calculate these, a camera calibration process is required.

It involves reading a set of warped chessboard images, converting them into greyscale images, then using cv2.findChessboardCorners() to identify the corners as imgpoints.
9x6 Chessboard Corners Detected

If corners are detected then they are collected as image points imgpoints along with a set of object points objpoints; with an assumption made that the chessboard is fixed on the (x,y) plane at z=0 (object points will hence be the same for each calibration image).

In the function camera_calibrate I pass the collected objpoints, imgpoints and a test image for the camera image dimensions. It in turn uses cv2.calibrateCamera() to calculate the distortion coefficients before the test image is undistorted with cv2.undistort(), giving the following result.
Original and Undistorted image
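
A hedged sketch of this calibration step; the OpenCV calls are as described above, while the file paths and variable names are illustrative rather than taken from camera_calibrate itself:

import glob
import cv2
import numpy as np

# Object points for a 9x6 chessboard assumed fixed on the z=0 plane
objp = np.zeros((9 * 6, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

objpoints, imgpoints = [], []
for fname in glob.glob('camera_cal/calibration*.jpg'):   # illustrative path
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, (9, 6), None)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

# Calibrate against the image dimensions, then undistort a test image
test_image = cv2.imread('test_images/test1.jpg')          # illustrative path
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
undistorted = cv2.undistort(test_image, mtx, dist, None, mtx)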

Pipeline (Test images)

After camera calibration, a set of functions was created to work on test images before later being used in a video pipeline.

Distortion corrected image

The undistort_image function takes an image and defaults the mtx and dist variables from the previous camera calibration before returning the undistorted image.
test image distorted and undistorted

Threshold binary images

A threshold binary image, as the name infers, contains a representation of the original image but in binary (0, 1) as opposed to a BGR (Blue, Green, Red) colour spectrum. The threshold part means that if, say, the red colour channel (with a range of 0-255) falls within a threshold range of 170-255, the pixel is set to 1.

A sample output follows.
Sample Threshold Image

Initial experimentation occurred in a separate notebook before being refactored back into the project notebook in the combined_threshold function. It has a number of default thresholds for sobel gradient x & y, sobel magnitude, sobel direction, saturation (from HLS), red (from RGB) and Y (luminance from YUV), plus a threshold type parameter (daytime-normal, daytime-bright, daytime-shadow, daytime-filter-pavement).
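
As an illustration of what one of those thresholds looks like, here is a hedged sketch combining a Sobel-x gradient threshold with an HLS saturation threshold; the function name and ranges are examples, not the exact defaults in combined_threshold:

import cv2
import numpy as np

def threshold_binary(img, sx_thresh=(20, 100), s_thresh=(170, 255)):
    # Sobel x gradient on the greyscale image, scaled to 0-255
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    sobelx = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 1, 0))
    scaled = np.uint8(255 * sobelx / np.max(sobelx))

    # Saturation channel from the HLS colour space
    s_channel = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)[:, :, 2]

    # Pixels passing either threshold are set to 1
    binary = np.zeros_like(gray)
    binary[((scaled >= sx_thresh[0]) & (scaled <= sx_thresh[1])) |
           ((s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1]))] = 1
    return binary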

Whilst the daytime-normal threshold worked great for the majority of images, there were situations where it didn’t, e.g. pavement colour changes in bright light and shadow.

Daytime Normal with noise bright light & pavement change
Daytime Normal with shadow

Other samples Daytime Bright, Daytime Shadow and Daytime Filter Pavement.

Perspective transform – birds eye view

To be able to detect the road lines, the undistorted image is warped. The function calc_warp_points takes an image’s height & width and then calculates the src and dst arrays of points. perspective_transforms takes them and returns two matrices, M and Minv, for the perspective_warp and perpective_unwarp functions respectively. The following image shows an undistorted image with the src points drawn, alongside the corresponding warped image (the goal here was straight lines).
Distorted with bird's eye view
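
A hedged sketch of what that transform pair boils down to; the src and dst point arrays themselves come from calc_warp_points, so they are not reproduced here:

import cv2

def perspective_transforms(src, dst):
    # Forward and inverse perspective matrices from the src/dst point sets
    M = cv2.getPerspectiveTransform(src, dst)
    Minv = cv2.getPerspectiveTransform(dst, src)
    return M, Minv

def perspective_warp(img, M):
    # Warp the undistorted image into the birds-eye view
    height, width = img.shape[:2]
    return cv2.warpPerspective(img, M, (width, height), flags=cv2.INTER_LINEAR)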

Lane-line pixel identification and polynomial fit

Once we have a birds-eye view with a combined threshold, we are in a position to identify lane-line pixels and fit a polynomial to draw a line (or to search for points in a binary image).

topdown warped binary image

A histogram is created via lane_histogram from the bottom third of the topdown warped binary image. Within lane_peaks, scipy.signal is used to identify left and right peaks. If just one peak is found, the max bin on either side of centre is returned.
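
A sketch of that histogram step; the peak finder shown (scipy.signal.find_peaks_cwt) is an assumption about what lane_peaks uses internally, and the widths argument is illustrative:

import numpy as np
from scipy import signal

def lane_histogram(binary_warped):
    # Sum pixel counts per column over the bottom third of the warped binary image
    bottom_third = binary_warped[binary_warped.shape[0] * 2 // 3:, :]
    return np.sum(bottom_third, axis=0)

# binary_warped: the topdown warped binary image from the previous step
# histogram = lane_histogram(binary_warped)
# peaks = signal.find_peaks_cwt(histogram, np.arange(1, 100))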

calc_lane_windows uses these peaks along with a binary image to initialise a left and right instance of a WindowBox class. find_lane_window then controls the WindowBox search up the image to return an array of WindowBoxes that should contain the lane line. calc_fit_from_boxes returns a polynomial or None if nothing found.

The poly_fitx function takes a fity, where fity = np.linspace(0, height-1, height), and a polynomial, and calculates an array of x values.
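
In essence this is just a polynomial evaluation; a minimal sketch (the image height of 720 is illustrative):

import numpy as np

def poly_fitx(fity, line_fit):
    # Evaluate the fitted polynomial coefficients at each y value
    return np.polyval(line_fit, fity)

height = 720                                  # illustrative image height in pixels
fity = np.linspace(0, height - 1, height)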

The search result is plotted on the bottom left of the image below, with each box in green. To test line searching by polynomial, I then use the left & right WindowBox search polynomials as input to calc_lr_fit_from_polys. The bottom-right graphic has the new polynomial line drawn, with a blue search window (relating to the polynomial used for the search from the WindowBoxes) overlapping a green window for the new fit.

Warped box seek and new polynomial fit

Radius of curvature calculation and vehicle from centre offset

In road design, curvature is important and it is normally measured by its radius length. For a straight road, that value can be quite high.

In this project our images are in pixel space and need to be converted into metres. The images are of US roads, and I measured from this image the distance between lines (413 px) and the height of dashes (275 px). Lane width in the US is ~3.7 metres and dashed lines are 3 metres long. Thus xm_per_pix = 3.7/413 and ym_per_pix = 3./275 were used in calc_curvature. The function converts the polynomial from pixel space into a polynomial in metres.

To calculate the offset from centre, I first determined where on the x plane both the left lx and right rx lines crossed the image near the driver. I then calculated the xcenter of the image as width/2. The offset was calculated as (rx - xcenter) - (xcenter - lx) before being multiplied by xm_per_pix.
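
A hedged sketch of these calculations using the pixel-to-metre scales quoted above and the standard radius-of-curvature formula; the function names are hypothetical and calc_curvature itself may differ in detail:

import numpy as np

xm_per_pix = 3.7 / 413    # metres per pixel in the x direction
ym_per_pix = 3.0 / 275    # metres per pixel in the y direction

def curvature_in_metres(fity, fitx, y_eval):
    # Refit the line in metre space, then apply R = (1 + (2*A*y + B)^2)^1.5 / |2*A|
    fit_m = np.polyfit(fity * ym_per_pix, fitx * xm_per_pix, 2)
    A, B = fit_m[0], fit_m[1]
    return (1 + (2 * A * y_eval * ym_per_pix + B) ** 2) ** 1.5 / np.absolute(2 * A)

def centre_offset_metres(lx, rx, width):
    # Offset of the lane centre from the image centre, per the formula above
    xcenter = width / 2
    return ((rx - xcenter) - (xcenter - lx)) * xm_per_pix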

Final pipeline

I decided to take a more python class based approach once I progressed through this project. Inside the classes, I called the functions mentioned previously. The classes created were:

  • Lane contains image processing, final calculations for view drawing and reference to left and right RoadLines. It also handled searching for initial lines, recalculations and reprocessing a line that was not sane;
  • RoadLine contains a history of Lines and associated curvature and plotting calculations using weighted means; and
  • Line contains details about the line and helper functions

Processing is triggered by setting the Lane.image variable. Convenient property methods Lane.warped, Lane.warped_decorated, lane.result and lane.result_decorated return processed images. This made it very easy to debug output using interactive ipywidgets (which don’t work on github).
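
A minimal sketch of that property-driven trigger; the real Lane class carries far more state and calls the functions described earlier:

class Lane(object):
    def __init__(self):
        self._image = None

    @property
    def image(self):
        return self._image

    @image.setter
    def image(self, value):
        # Assigning a new frame kicks off the full processing pipeline
        self._image = value
        self._process()

    def _process(self):
        # undistort -> threshold -> warp -> find lines -> draw,
        # as per the functions described above (omitted here)
        pass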

Sample result images

lane.result_decorated
Lane.warped_decorated

Pipeline (Video)

Using moviepy to process the project video was simple. I also decorated the result with a frame count. The Project Video Lane mp4 on GitHub contains the result (YouTube copy).
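
A hedged sketch of that video step with moviepy; the file names are illustrative and process_frame is a hypothetical hook into the Lane class described above:

from moviepy.editor import VideoFileClip

def process_frame(rgb_frame):
    # Hypothetical: push the frame through the Lane instance and return the decorated result
    lane.image = rgb_frame
    return lane.result_decorated

clip = VideoFileClip('project_video.mp4')                       # illustrative file name
clip.fl_image(process_frame).write_videofile('project_video_lane.mp4', audio=False)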

Discussion

Problems/Issues faced

To some degree, I got distracted with trying to solve the issues I found in my algorithm with the challenge videos. This highlighted that I need to improve my understanding of colour spaces, sobel and threshold combinations.

I included a basic algorithm to remove pavement colours from the images using centre, left and right focal points. I noticed that the dust colour on the vehicle also seemed to be present in the roadside foliage. This however wasn’t sufficient to remove all pavement colour, and didn’t work when there was a road type transition. It was also very CPU intensive.

In the end, I used a combination of different methods, with a basic noise filter on the warped binary images to determine whether it was sufficient to look for a line or not. If it wasn’t, it tried the next method, with the final being a vertical rectangle window crawl down the image, where the best filter was determined for each box. Again this was CPU intensive, but it worked.

Another issue faced was using the previous curvature radius to determine whether a line was sane or not. The values were too jittery and, when driving on a straight line, too high. I decided not to pursue this.

Opportunities for improvement in the algorithm/pipeline

There is room here for some refactoring into a more object-oriented approach. It was not evident at the start of the project how it should be structured. I experimented a little with using Pool from multiprocessing to parallelise left and right lane searches. It didn’t make it into my final classes for normal line searching using a polynomial, as I did not ascertain whether the multiprocessing overhead outweighed the value of the parallelism. There is certainly potential here to use a more functional approach to give the best runtime options for parallelisation.

Other areas include automatically detecting the src points for the warp, handling bounce in the road, and understanding the height of the camera above the road and its impact.

I thought also, as I’ve kept history, that I could extend the warp to include a bird’s-eye representation of the car on the road and directly behind it. I did mean-averaging on results for smoothing drawn lines, but this was not included in the new line calculations for the next image frames.

The algorithm could also be made to make predictions about the line when there are gaps. This would be easier with continuous lines than dashed.

Hypothetical pipeline failure cases

Pavement fixes, and/or pavement combined with other surfaces, that create vertical lines near existing road lines.

It would also fail if there was a road crossing or a need to cross lanes or to exit the freeway.

Rain and snow would also have an impact and I’m not sure about night time.

Tailgating a car, or a car on a tighter curve, would potentially obstruct the camera’s view and hence line detection.

Clone Driving Behaviour

Clone driving behaviour using Deep Learning

With this behaviour cloning project, we give steering & throttle instructions to a vehicle in a simulator based on receiving a centre camera image and telemetry data. The steering angle is predicted by a neural network model trained against data saved from track runs I performed.
simulator screen shot

The training of the neural net model is achieved with driving behaviour data captured, in training mode, within the simulator itself. Additional preprocessing occurs as part of batch generation of data for the neural net training.

Model Architecture

I decided to follow, as closely as possible, Nvidia’s End to End Learning for Self-Driving Cars model. I diverged by passing cropped camera images as RGB (not YUV), by adjusting brightness, and by using the steering angle as-is. I experimented with using 1/r (inverse turning radius) as input but found the values were too small (I also did not know the steering ratio and wheelbase of the vehicle in the simulator).

Additional experimentation occurred with the comma.ai steering angle prediction model, but the number of parameters was higher than the nvidia model and it worked off full-sized camera images. As training time was significantly higher, and initial iterations created an interesting off-road driving experience in the simulator, I discontinued these endeavours.

The model represented here is my implementation of the nvidia model mentioned previously. It is coded in python using keras (with tensor flow) in model.py and returned from the build_nvidia_model method. The complete project is on github here Udacity Behaviour Cloning Project

Input

The input is 66x200xC with C = 3 RGB color channels.

Architecture

Layer 0: Normalisation to range -1, 1 (x/127.5 - 1)

Layer 1: Convolution with strides=(2,2), valid padding, kernel 5×5 and output shape 31x98x24, with elu activation and dropout

Layer 2: Convolution with strides=(2,2), valid padding, kernel 5×5 and output shape 14x47x36, with elu activation and dropout

Layer 3: Convolution with strides=(2,2), valid padding, kernel 5×5 and output shape 5x22x48, with elu activation and dropout

Layer 4: Convolution with strides=(1,1), valid padding, kernel 3×3 and output shape 3x20x64, with elu activation and dropout

Layer 5: Convolution with strides=(1,1), valid padding, kernel 3×3 and output shape 1x18x64, with elu activation and dropout

flatten 1152 output

Layer 6: Fully Connected with 100 outputs and dropout

Layer 7: Fully Connected with 50 outputs and dropout

Layer 8: Fully Connected with 10 outputs and dropout

Dropout was set aggressively on each layer at .25 to avoid overfitting.

Output

Final layer: Fully Connected with 1 output value for the steering angle.
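
A hedged sketch of what the build_nvidia_model method assembles, written against the current Keras API (the original model.py used the 2017-era Keras, so layer names differ; the elu activation on the fully connected layers is my assumption, and the optimizer/loss follow the training section below):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Lambda, Conv2D, Dropout, Flatten, Dense

def build_nvidia_model_sketch(dropout=0.25):
    model = Sequential()
    # Layer 0: normalise the 66x200x3 input to the range -1..1
    model.add(Lambda(lambda x: x / 127.5 - 1.0, input_shape=(66, 200, 3)))
    # Layers 1-3: 5x5 convolutions, stride 2, valid padding, elu + dropout
    for depth in (24, 36, 48):
        model.add(Conv2D(depth, (5, 5), strides=(2, 2), padding='valid',
                         activation='elu'))
        model.add(Dropout(dropout))
    # Layers 4-5: 3x3 convolutions, stride 1, valid padding, elu + dropout
    for depth in (64, 64):
        model.add(Conv2D(depth, (3, 3), strides=(1, 1), padding='valid',
                         activation='elu'))
        model.add(Dropout(dropout))
    model.add(Flatten())                      # 1152 outputs
    # Layers 6-8: fully connected with dropout
    for units in (100, 50, 10):
        model.add(Dense(units, activation='elu'))
        model.add(Dropout(dropout))
    model.add(Dense(1))                       # steering angle output
    model.compile(optimizer='adam', loss='mse')
    return model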

Visualisation

Keras output plot (not the nicest visuals)

Data preprocessing and Augmentation

The simulator captures data into a csv log file which references left, centre and right captured images within a sub directory. Telemetry data for steering, throttle, brake and speed is also contained in the log. Only steering was used in this project.

My initial investigation and analysis was performed in a Jupyter Notebook here.

Before being fed into the model, the images are cropped to 66×200 starting at height 60 with width centered – A sample video of a run cropped.

Cropped left, centre and right camera image

As seen in the following histogram, a significant proportion of the data is for driving straight, and it is lopsided towards left turns (a negative steering angle being left) when using data generated from my conservative driving laps.
Steering Angle Histogram

The log file was preprocessed to remove contiguous runs of more than 5 records with a 0.0 steering angle. This was the only preprocessing done outside of the batch generators used in training (random rows are augmented/jittered for each batch at model training time).

A left, centre or right camera image was selected randomly for each row, with a .25 angle adjustment (+ for left and – for right) applied to the steering.

Jittering was applied per Vivek Yadav’s post to augment the data. Images were randomly transformed in the x range by 100 pixels and in the y range by 10 pixels, with 0.4 per x pixel adjusted against the steering angle. Brightness was also adjusted via an HSV (V channel) transform (.25 plus a random number in the range 0 to 1).
jittered image

During batch generation, to compensate for the left-turn bias, 50% of images were flipped (including reversing the steering angle) if the absolute steering angle was > .1.

Finally, images are cropped as per above before being batched.
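
A hedged sketch of the flip-and-crop part of that batch augmentation; the actual generator in model.py also applies the camera selection, jitter and brightness steps described above, and the helper name here is hypothetical:

import cv2
import numpy as np

def flip_and_crop(image, steering, crop_y=60, crop_h=66, crop_w=200):
    # Randomly flip 50% of images with a meaningful turn to counter the left-turn bias
    if abs(steering) > 0.1 and np.random.rand() < 0.5:
        image = cv2.flip(image, 1)
        steering = -steering
    # Crop to 66x200 starting at height 60, centred on width
    x0 = (image.shape[1] - crop_w) // 2
    image = image[crop_y:crop_y + crop_h, x0:x0 + crop_w]
    return image, steering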

Model Training

Data was captured from the simulator. I drove conservatively around the track three times, paying particular attention to the sharp right turn. I found connecting a PS3 controller allowed finer control than using the keyboard. At least once I waited till the last moment before taking the turn. This seems to have stopped the car ending up in the lake. It also helped to overcome a symptom of the bias in the training data towards left turns. To further offset this risk, I validated the training using a test set I’d captured from the second track, which is a lot more winding.

Training sample captured of left, centre and right cameras cropped

Center camera has the steering angle and 1/r values displayed.

Validation sample captured of left, centre and right cameras cropped

Center camera has the steering angle and 1/r values displayed.

The Adam optimizer was used with a mean squared error loss. A number of hyper-parameters were passed on the command line. The command I used looks as follows, for a batch size of 500, 10 epochs (dropped out early if the loss wasn’t improving), dropout at .25, a training size of 50000 randomly augmented features with adjusted labels, and 2000 random features & labels used for validation:

python model.py --batch_size=500 --training_log_path=./data --validation_log_path=./datat2 --epochs 10 \
--training_size 50000 --validation_size 2000 --dropout .25

Model Testing

To meet requirements, and hence pass the assignment, the vehicle has to drive around the first track staying on the road and not going up on the curb.

The trained model (which is saved) is used again in testing. The simulator feeds you the centre camera image, along with steering and throttle telemetry. In response you have to return the new steering angle and throttle values. I hard coded the throttle to .35. The image was cropped, the same as for training, then fed into the model for prediction, giving the steering angle.


steering_angle = float(model.predict(transformed_image_array, batch_size=1))
throttle = 0.35

Successful run track 1

Successful run track 2

Note: the trained model I used for the track 1 run is different to the one used to run the simulator on track 2. I found that the data I originally used to train a model to run both tracks would occasionally meander quite wildly on track 1. Thus I used training data to make it more conservative, to meet the requirements of the project.

My first lane detection algo

I’m all for practical learning by building things. There’s nothing like getting stuck into a project and seeing results. Whilst a little progress is a good motivator, it also shows you how much you don’t know.

I was pleased with the results of my first project in the Udacity Self Driving Car Engineer Nanodegree. Yet what was more pleasing was being shown how much experimentation is really required. That is, there is so much to learn.

This first module was about understanding some of the principles of computer vision that apply. We first started with Canny Edge Detection, and then the Hough transform to detect lines within a region of interest.

The first project was to apply this learning, first to a set of static images and then to a couple of videos captured whilst driving on a highway.
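
As a hedged sketch, the core of that first pipeline boils down to something like the following; the function name, thresholds and ROI vertices are illustrative rather than my submitted values:

import cv2
import numpy as np

def detect_lane_lines(image):
    # Canny edge detection on a blurred greyscale image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Keep only a triangular region of interest in front of the car
    height, width = edges.shape
    mask = np.zeros_like(edges)
    vertices = np.array([[(0, height), (width // 2, int(height * 0.6)),
                          (width, height)]], dtype=np.int32)
    cv2.fillPoly(mask, vertices, 255)
    masked = cv2.bitwise_and(edges, mask)

    # Probabilistic Hough transform to find line segments
    lines = cv2.HoughLinesP(masked, rho=2, theta=np.pi / 180, threshold=20,
                            minLineLength=40, maxLineGap=100)
    return lines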

How cool is the final result.

I submitted my Jupyter Notebook for review. To pass, you need to ensure that you meet the specification. I passed – here is the review feedback.

Some of my reflection thoughts on what can be improved in a future iteration of the project:

  • look for the road horizon starting from bottom centre of the image working up – asphalt has a fairly unique colour
  • break ROI into left and right lanes earlier – it seems that, at least when driving on a highway without lane changing, we can assume with confidence where they should start at the base of the image
  • segment each ROI into chained vertical blocks of a smaller width
  • when drawing connect intersection of cv2.fitLine lines
  • increase number of segments if lanes are curving left or right
  • label the lines with colour and type – continuous, dashed etc
  • feed the previous result into the evaluation of the next image
  • determine when an image has no lanes that could be considered reasonable
  • lane changing and entering a lane from a curb needs more thought
  • if using smaller more specific left and right lane ROIs should allow for following a vehicle
  • not sure how rain affects this – might have to do a test and capture video in a tropical downpour this storm season
  • this approach wouldn’t work in snow; it would require a different approach

It’s still early in the nano degree but I’m hooked already. Happy coding and driving.

Switching off from Aussie innovation for the time being …

The slow dawn of reality has crept into my thinking: what I’m presently witnessing is the rise of politically correct innovation within Australia. That is, there is a rush on to be positioned, to secure funding and to “innovation wash” existing service offerings, ready for when government programs come into effect.

My high hopes for an ideas boom have been dashed somewhat of late. Not so much from the intent, but from the reality, that the intent does not match reality. There is significant education (dare I say re-education) required.

Let me show you what I mean.

If we look at the innovation website Business.gov.au (their definition here), it basically suggests that innovation is about change. It follows:

What is innovation?

Innovation generally refers to changing or creating more effective processes, products and ideas, and can increase the likelihood of a business succeeding. Businesses that innovate create more efficient work processes and have better productivity and performance.

Now if we look at the wikipedia innovation article, it suggests that the term “innovation” can be defined as something original and more effective and, as a consequence, new, that “breaks into” the market or society.

Innovation is a new idea, or more-effective device or process.[1] Innovation can be viewed as the application of better solutions that meet new requirements, unarticulated needs, or existing market needs.[2] This is accomplished through more-effective products, processes, services, technologies, or business models that are readily available to markets, governments and society.

So from my perspective the first one is inward-looking, using the term innovation (as in “that’s an innovative idea”) for business change or continuous improvement.

I’ve often argued that changing a business process to make it more effective is not innovation. However, if that idea is brought to market as a new product or service offering, then it is innovation, i.e. there has to be diffusion into a market or society.

Now if we look at the slick new marketing or education material (your viewpoints may differ on this) being produced by the National Innovation & Science Agenda (Australian Government), it refers to how Australians have been good at ideas, but now we need to get better at commercialising – turning those ideas into new products or services.

As you can see, there is a long road ahead, with a lot of jargon presently, such as “Ideas Boom”. It will take some time for people to agree on what things mean (even though they have great definitions available now) and to reach consensus. Then decisions will need to be made about how much capital is to be made available and under what investment thesis it will be allocated.

There are a lot of people shouting about a number of things surrounding these topics, and if you’re not shouting the politically correct message too, then no matter how novel and disruptive your idea or invention is, it may not benefit from the “Ideas Boom”. If this is affecting you, jump on a plane and go to Silicon Valley or wherever else may be appropriate.

I keep hearing about the lack of opportunity here in Australia, in many fields, on podcasts I listen to occasionally. On those podcasts, when people ask eminent panel members for their thoughts on the subject, invariably their answer will be that they do hope you stay and help drive the next generation. It always surprises me, assuming that these persons are in tenured positions, how devoid their responses seem to be from the reality of needing money (or, some may say, capital) and support to do so. It’s this latter bit that will take so long to grow here in Australia. It may also require a generational change. The notion of not taking risks, in some, is the antithesis of what is required in an “ideas boom” era.

So I’m thinking of slowly fading away from observing and commenting on all of this, until I need something concrete from it. Presently it all seems to be a nice discussion, but discussion is after all discussion and not tangible outcomes.

 

Deflect, defer elsewhere and finally block – councillors answer to innovation

No one likes to be blocked on twitter. My first reaction to being blocked by William Owen-Jones, a local Councillor on the Gold Coast, was that he’s a sore loser and I had won. He’d just thrown in the towel. I was sort of rejoicing. However, there was more to this; I must have really hit a few bad nerves. This happens when cultures, and what one values as important, are worlds apart.
Tweets_with_replies_by_William_Owen-Jones___WOJgoldcoast____Twitter

As can be seen in the photo attached to this article, William Owen-Jones threw this blocking straight in my face. I never swore or called him bad names, and I tried very hard not to be rude. I asked questions and responded to tweets directed at me. I’m not really sure that this is acceptable behaviour for a public profile – which can be seen here.

One may also argue that this blog post is doing the same thing – throwing it back in his face. However, I’ve been thinking for a while whether I should write it or not. Clearly, I’ve decided to do so, because the little online incident shows just how much work is required to change attitudes in the city I presently live in, the Gold Coast, Australia. There is no real tech/entrepreneurial culture here, outside of a small few pockets, whereby those presently classified as leaders have had little to no real exposure to an innovative tech culture based around startups, nor to large groups of techies & programmer types.

Twitter is the place where you can communicate with people you normally wouldn’t. In the case of elected officials, twitter acts as a conduit through which those persons can engage more readily to find out what constituents’ needs and wants are. But this works both ways, in that constituents can find out the machinations behind the public office. When a public official blocks, it just says: don’t talk to me unless you agree with what I’m going to tell you. For intelligent and inquiring people, that’s just so wrong.

I, like so many other techies in Australia, have been amazed over the last decade or so at the sheer conservatism of our leaders, at all levels in the public and private sectors, regarding technology and innovation. It was the latter that I was really pushing the councillor on. I wanted to know what the local council was doing and how they intended to respond to the recent statements by Australia’s new Prime Minister, Malcolm Turnbull, that innovation is at the forefront of Australia’s economic agenda – an opinion piece can be found here. An excerpt from Malcolm’s speech:

“We have to work more agilely, more innovatively, we have to be more nimble in the way we seize the enormous opportunities that are presented to us. We’re not seeking to proof ourselves against the future. We are seeking to embrace it,” Malcolm Turnbull said during ministry announcement speech.

What I found was that I was deflected to things such as a press release to run for office (yea right).
Tweets_with_replies_by_William_Owen-Jones___WOJgoldcoast____Twitter

Or I was deferred to other levels of government.

But I wanted to know “what are the local Council policies?” and “what are they doing?”. Unlike other councils around Australia, e.g. the Sunshine Coast, there appeared to be no policies. There also seemed to be no understanding of what role local council, in particular the GCCC, would play (or would like to play).

It also became clear that what innovation meant was not well understood by Will – he deferred to some BRW (Business Review Weekly, an Australian publication) definition of it as “change that adds value”. That’s just continuous improvement. I referred him to Clayton Christensen and disruptive innovation. However, that was lost on him… no response.

There was no distinction, that I could perceive, between old-school internal enterprise IT and innovation (R&D, entrepreneurship and commercialisation of novel IP). They seemed to be the same thing? This really surprised me.

I tried to suggest it’s time to stop following and to start leading. Clearly, by pushing for answers and calling BS on him, I just highlighted his ignorance.

I feel quite strongly, after seeing so many colleagues and associates leave the Gold Coast for greener pastures, that the GCCC needs to address this question of innovation in a more professional, thought-out way. The existing approach is just ticking a few boxes. Old players are protecting turfs (and budgets and reputations).

It’s time for some renewal. They can’t just keep blocking it out!