Path Planning

In this Path Planning project, we guide a vehicle down a highway in a simulator. The vehicle must do so without collision, stay in the three right-hand lanes, obey the speed limit, minimise acceleration and jerk, and stay within its current lane unless overtaking.

lane change manoeuvre

Valid Trajectories

The program was able to drive the simulator for over an hour without incident, at which point I terminated the run. Occasionally, when you turn into a lane, your vehicle may be side-swiped or rear-ended; this seemed to be an issue with the simulator's robotic cars.

The speed limit was set at 50 MPH, and exceeding it causes a violation, so in the program the target speed was set slightly below, at 47.5 MPH.

The spline technique supplied in the walkthrough was effective at ensuring the maximum acceleration (in m/s²) and jerk (in m/s³) were not exceeded.
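The idea can be sketched in Python with SciPy (the function and variable names here are illustrative, not the project's actual code): fit a spline through a few sparse anchor points ahead of the car, then sample it so consecutive points are one simulator timestep of travel apart at the target speed (21.2 m/s ≈ 47.5 MPH), which is what keeps acceleration and jerk smooth.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sample_spline_path(anchor_x, anchor_y, target_speed=21.2, dt=0.02, n_points=50):
    """Fit a spline through sparse anchor points (car coordinates, so x
    increases straight ahead) and sample it so consecutive points are one
    timestep of travel apart at the target speed."""
    spline = CubicSpline(anchor_x, anchor_y)
    # Linearise out to the last anchor point to estimate the x-spacing
    # that corresponds to travelling target_speed * dt along the path.
    horizon_x = anchor_x[-1]
    horizon_dist = float(np.hypot(horizon_x, spline(horizon_x)))
    x_step = horizon_x * (target_speed * dt) / horizon_dist
    xs = np.arange(1, n_points + 1) * x_step
    return xs, spline(xs)
```

Because the sample spacing is derived from the target speed, gently raising or lowering that speed between frames bounds the acceleration and jerk of the generated trajectory.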

In normal driving the car does not collide with other vehicles. Whilst in a lane, it matches either the speed of the vehicle in front or the average speed for the lane, whichever is lower.

The vehicle stays consistently in the centre of its 4-metre-wide lane; Frenet coordinates greatly assisted with this.

A cost function was implemented with a bias towards staying in the existing lane, matching the average lane speed, avoiding collisions and keeping a buffer to other vehicles in the proposed lanes. If the best lane found was still the existing lane, with a vehicle close ahead, then speed was adjusted to match it.
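A minimal sketch of such a cost function might look like the following (the weights, names and thresholds are illustrative assumptions, not the values used in the project):

```python
def lane_cost(lane, current_lane, lane_speed, nearest_gap,
              speed_limit=21.2, min_gap=15.0):
    """Illustrative lane cost: keep-lane bias, slow-lane penalty,
    and a large penalty when the buffer to the nearest car is too small."""
    cost = 0.0
    if lane != current_lane:
        cost += 0.2                                    # bias to stay in lane
    cost += (speed_limit - lane_speed) / speed_limit   # slower lanes cost more
    if nearest_gap < min_gap:
        cost += 1.0                                    # buffer too small: veto
    return cost

# Pick the cheapest of the three right-hand lanes, given per-lane
# average speeds (m/s) and distances to the closest car (m).
lane_speeds = [18.0, 21.0, 20.0]
nearest_gaps = [40.0, 10.0, 35.0]
best = min(range(3), key=lambda l: lane_cost(l, 0, lane_speeds[l], nearest_gaps[l]))
```

Here lane 1 is fastest but its small gap makes it expensive, so the car stays in lane 0 and adjusts speed instead.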

Reflection

I spent significant time, before the walkthrough video was released, creating another solution. This used the behaviour-planning and Jerk Minimising Trajectory approaches from the lessons; the code can be found on GitHub here. I couldn't quite get all the components working together: I mainly had issues generating jerk-free trajectories in global coordinates from a pipeline of Frenet coordinates, even after creating waypoint splines. At some point I may review this approach again. I found it somewhat difficult to debug using a simulator; in hindsight, if I started again, more focus would be placed on unit test cases.

The final approach I used to submit the project for review was based on the walkthrough video. I added some lane speed & nearest approach calculations to provide input for a basic cost function as described above.

I really like the spline approach: converting coordinates into car space and sampling them to create a smooth trajectory. The code for this project, although not the neatest or most elegant, effectively drives the vehicle in the simulator to meet the requirements.


Vehicle Detection and Tracking

In this vehicle detection and tracking project, a video pipeline detects potential boxes that may contain a vehicle, via a sliding window, using a Support Vector Machine classifier for prediction, and accumulates the predictions into a heat map. The heat map history is then used to filter out false positives before each vehicle is identified by drawing a bounding box around it.

Vehicle Detection Sample

Vehicle Detection Project

The goals / steps of this project are the following:

  • Perform a Histogram of Oriented Gradients (HOG) feature extraction on a labeled training set of images and train a Linear SVM classifier
  • Optionally, you can also apply a color transform and append binned color features, as well as histograms of color, to your HOG feature vector.
  • Note: for those first two steps don’t forget to normalize your features and randomize a selection for training and testing.
  • Implement a sliding-window technique and use your trained classifier to search for vehicles in images.
  • Run your pipeline on a video stream (start with the test_video.mp4 and later implement on full project_video.mp4) and create a heat map of recurring detections frame by frame to reject outliers and follow detected vehicles.
  • Estimate a bounding box for vehicles detected.

A Jupyter/IPython data science notebook was used and can be found on GitHub: Full Project Repo, Vehicle Detection Project Notebook (note the interactive ipywidgets are not functional on GitHub). As the notebook grew rather large, I extracted some code into Python files: utils.py (extraction functions and loading helpers), features.py (feature extraction and classes), images.py (image and window-slice processing), search.py (holds the search parameters class), boxes.py (windowing and box classes) and detection.py (the main VehicleDetection class that coordinates processing of images). The project is written in Python and utilises numpy, OpenCV, scikit-learn and MoviePy.

Histogram of Oriented Gradients (HOG)

Through a bit of trial and error I found a set of HOG parameters.

HOG Feature Extraction and Parameters

A function extract_hog_features was created that took an array of 64x64x3 images and returned a set of features. These are extracted in parallel, and it in turn uses the HogImageFeatures class.

As the HOG algorithm is primarily focused on greyscale images, I initially used the YCrCb colour space with just the Y channel (which represents the grey image). However, I found it was not selective enough during the detection phase, so I used all three colour channels. To reduce the number of features, I increased the number of HOG pixels per cell. I used an interactive feature in my notebook to find an orientation setting of 32 that showed distinctive features of a vehicle. A sample follows.

Training Vehicle HOG Sample

The final parameter settings used were color_space = 'YCrCb', orient = 32, pix_per_cell = 16 and hog_channel = 'ALL'. I experimented with adding colour histogram features, but they slowed down feature extraction and later increased the number of false positives detected. As the following visualisation shows, the Cr and Cb colour channels also had detectable HOG features.

Sample HOG Channel Output from a video window slice
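A single-image version of this extraction might be sketched as follows (not the repo's extract_hog_features/HogImageFeatures code; the function name and the cells-per-block value of 2 are assumptions, and skimage is used in place of OpenCV for the colour conversion):

```python
import numpy as np
from skimage.color import rgb2ycbcr
from skimage.feature import hog

def hog_features(rgb_image, orient=32, pix_per_cell=16, cell_per_block=2):
    """Concatenate HOG features from all three channels of a 64x64x3 image.
    Note: skimage's rgb2ycbcr orders the chroma channels Cb, Cr (the reverse
    of OpenCV's YCrCb), which does not matter when all channels are used."""
    ycc = rgb2ycbcr(rgb_image)
    channels = [hog(ycc[:, :, c],
                    orientations=orient,
                    pixels_per_cell=(pix_per_cell, pix_per_cell),
                    cells_per_block=(cell_per_block, cell_per_block),
                    feature_vector=True)
                for c in range(3)]
    return np.concatenate(channels)
```

With 16 pixels per cell on a 64x64 image there are 4x4 cells, so 3x3 block positions per channel; increasing pix_per_cell is what keeps the feature vector manageable despite the large orientation count.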

Classifier Training

Once HOG features (no colour histogram or spatial binning) were extracted from the car (GTI Vehicle Image Database and Udacity Extras) and not-car (GTI, KITTI) image sets, they were stacked and converted to float in the vehicle detection notebook.

Features were then scaled using the sklearn RobustScaler; a sample result follows.
RobustScaler Feature Sample

Experimentation occurred in the Classifier Experimentation Notebook between LinearSVC (a linear Support Vector Machine classifier), RandomForest and ExtraTrees classifiers. LinearSVC was chosen, as its prediction time was 0.00228 seconds for 10 labels compared to ~0.10 seconds for the other two.
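A condensed sketch of this scale-then-train step might look like the following (with randomly generated stand-in feature vectors, since the real car/not-car HOG features live in the notebook):

```python
import numpy as np
from sklearn.preprocessing import RobustScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

# Stand-in feature vectors; in the project these are the extracted HOG features.
rng = np.random.default_rng(0)
car_features = rng.normal(1.0, 1.0, size=(200, 50))
notcar_features = rng.normal(-1.0, 1.0, size=(200, 50))

X = np.vstack([car_features, notcar_features]).astype(np.float64)
y = np.hstack([np.ones(len(car_features)), np.zeros(len(notcar_features))])

# RobustScaler centres on the median and scales by the IQR, so outlier
# feature values distort the scaling less than with StandardScaler.
X_scaled = RobustScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(
    X_scaled, y, test_size=0.2, random_state=42)

clf = LinearSVC().fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

The held-out split gives a quick sanity check on the classifier before it is let loose on video frames.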

Sliding Window Search

Building sliding windows

For this project four sizes of window were chosen – 32×32, 48×48, 64×64 and 128×128 – positioned at different depth perspectives on the bottom-right side of the image to cover the road: the larger windows closer to the driver and the smaller ones closer to the horizon. Overlap in both x and y was set between 0.5 and 0.8 to balance coverage against the number of boxes generated – currently 937. The more boxes in a sliding-window search, the more calculations per video frame.
Window Search Example
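Generating one tier of these windows can be sketched like this (the function name, region and overlap are illustrative, not the project's exact search parameters):

```python
def slide_window(x_range, y_range, size, overlap):
    """Generate (top-left, bottom-right) corners for square windows of a
    given size across a region, stepping by (1 - overlap) of the size."""
    step = max(1, int(size * (1 - overlap)))
    return [((x, y), (x + size, y + size))
            for y in range(y_range[0], y_range[1] - size + 1, step)
            for x in range(x_range[0], x_range[1] - size + 1, step)]

# e.g. 64x64 windows with 0.5 overlap over a band on the right of the image
windows = slide_window((640, 1280), (400, 528), 64, 0.5)
```

Repeating this for each window size and its own depth band, then concatenating the lists, yields the full set of search boxes.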

Classifier examples and optimisation

Some time was spent on parallelisation of the search using Python async methods and asyncio.gather in the VehicleDetection class. The search extracts the bounded box image of each sized search window and scales it to 64×64 before doing feature extraction and prediction on each window.
Small Window Slice Scaled to 64x64

The search function hot_box_search returns an array of hot boxes that the classifier predicted contain a vehicle.

These boxes overlap and are used to create a two-dimensional heat map, clipped at 255. To remove initial false positives, only counts > 4 are kept. The heat map is then normalised before another threshold is applied:

heatmap = apply_threshold(heatmap, 4)                   # keep only counts > 4
heatmap_std = heatmap.std(ddof=1)
if heatmap_std != 0.0:
    heatmap = (heatmap - heatmap.mean()) / heatmap_std  # normalise the map
heatmap = apply_threshold(heatmap, np.max([heatmap.std(), 1]))  # second threshold

Plotting this stage back onto the image gives:
detected boxes and heatmap

A history of heat maps is kept (Heatmap History), which is then used as input into SciPy's label function, with a binary structure linking all dimensions, giving:
Heatmap with the corresponding two cars identified as labels
Finally, a variance filter is applied to each box: a detected label's box is ignored if it is a single box with a variance < 0.1 (just a few close points), or one of multiple boxes with a variance < 1.5 (more noise).

Video Implementation

Vehicle Detection Video

The project VehicleDetection mp4 on GitHub contains the result (YouTube copy).

Result Video embedded from YouTube

Tracking Vehicle Detections

One of the nice features of the scipy.ndimage.measurements.label function is that it can process 3D arrays, giving labels in x, y, z space. Thus, when using the array of heat-map history as input, it labels connections in x, y and z. If a returned label box is not represented in at least 3 z planes (heat-map history max – 2), it is rejected as a false positive. The result is that a vehicle is tracked over the heat-map history kept.
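The z-plane rejection described above can be sketched as follows (the function name and threshold are illustrative; the modern import path scipy.ndimage.label is used):

```python
import numpy as np
from scipy.ndimage import label

def track_boxes(heatmap_history, min_planes=3):
    """Label connected regions across a stack of thresholded heat maps and
    keep only those present in at least min_planes frames of the history."""
    stack = np.stack(heatmap_history)            # shape (frames, y, x)
    structure = np.ones((3, 3, 3), dtype=int)    # connect across frames too
    labels, n = label(stack, structure=structure)
    kept = []
    for i in range(1, n + 1):
        zs, ys, xs = np.nonzero(labels == i)
        if len(np.unique(zs)) >= min_planes:     # persists over enough frames
            kept.append(((xs.min(), ys.min()), (xs.max(), ys.max())))
    return kept
```

A blob that appears in only one or two frames never spans enough z planes, so transient false positives are dropped while a real vehicle, detected frame after frame, is kept and tracked.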

Discussion

When constructing this pipeline, I spent some time working on parallelising the window search. What I found is that there is most likely little overall performance improvement to be gained by doing so: images have to be processed in series, and whilst generating the video my CPU was under-utilised.

In hindsight I should have used a heavyweight search to detect vehicles and then a lighter-weight, narrower search primed by the last known positions. The heavyweight search could be run at larger intervals, or when a vehicle detection is lost.

My pipeline would presently fail if vehicles were on the left-hand side or in the centre ahead of the car. I suspect trucks, motorbikes, cyclists and pedestrians would not be detected (as they are not in the training data).

The rise of chat bots and the fall of apps

It once was cool to build a mobile app, and many startups in the past built successful businesses with one and a minimalist website. Now the chances of a new mobile app creating mindshare and earning a spot on a person's home screen are next to impossible. We've reached peak app, and a new style of app, called chat bots, is taking mindshare.

App stores are flooded; the majority of apps are rarely downloaded – or found, for that matter, as they do not rank.

It's now harder than ever for a developer to build an app that will replace the staple set of apps a user already has on their device. The frontier has changed to chat apps that have a conversational-style interface, using either text or voice (think Siri). If you are building a new mobile app, stop! And reconsider how you are going to reach your target audience.

These new chat apps leverage existing instant-messaging apps and agents on websites. Increasingly, APIs are also being created and exposed to allow developers to interact with well-known personal assistants like Siri. Some may argue that the interaction between human and computer is frustrating. I'd agree, having had occasional back-and-forth sessions with Siri just to dial, on my iPhone, a person I call regularly. However, the situation is slowly improving as the machine learning/AI technology behind the scenes improves.

Many will argue that we are not seeing anything new – that it is just the same technology and approaches that have been around for ages, in the quest, as such, to pass the Turing test, where a judge cannot determine whether he or she is talking to a machine or a person.

I think we've reached an inflection point, where a new class of conversational chat bot is being enabled by the gradual and constant exponential evolution of computing technology and the sharing of open-source component technology (such as natural language processing), in conjunction with the ongoing quest to provide individually tailored answers to people's own questions through understanding the explosion of data available online.

This is also backed up by a dramatic increase in tech news coverage of startups in this area in the US, and by the training and conferences covering it.

So forget building a mobile app and start building a chat bot!

Switching off from Aussie innovation for the time being …

The slow dawn of reality has crept into my thinking: what I'm presently witnessing is the rise of politically correct innovation within Australia. That is, there is a rush on to be positioned, to secure funding, and to "innovation wash" existing service offerings, ready for when government programs come into effect.

My high hopes for an ideas boom have been dashed somewhat of late – not so much by the intent, but by the fact that the intent does not match reality. There is significant education (dare I say re-education) required.

Let me show you what I mean.

If we look at the innovation website Business.gov.au (their definition here), it basically suggests that innovation is about change. It follows:

What is innovation?

Innovation generally refers to changing or creating more effective processes, products and ideas, and can increase the likelihood of a business succeeding. Businesses that innovate create more efficient work processes and have better productivity and performance.

Now if we look at the Wikipedia innovation article, it suggests that the term "innovation" can be defined as something original and more effective and, as a consequence, new, that "breaks into" the market or society:

Innovation is a new idea, or more-effective device or process.[1] Innovation can be viewed as the application of better solutions that meet new requirements, unarticulated needs, or existing market needs.[2] This is accomplished through more-effective products, processes, services, technologies, or business models that are readily available to markets, governments and society.

So from my perspective the first one is inward-looking, using the term innovation (as in "that's an innovative idea") for business change or continuous improvement.

I've often argued that changing a business process to make it more effective is not innovation. However, if that idea is brought to market as a new product or service offering, then it is innovation – i.e. there has to be diffusion into a market or society.

Now if we look at the slick new marketing or education material (your viewpoint may differ on this) being produced by the Australian Government's National Innovation & Science Agenda, it refers to how Australians have been good at ideas, but now need to get better at commercialising – turning those ideas into new products or services.

As you can see, there is a long road ahead, with a lot of jargon presently, such as "Ideas Boom". It will take some time for people to agree on what things mean (even though good definitions are available now) and to reach consensus. Then decisions will need to be made about how much capital is to be made available, and under what investment thesis it will be allocated.

There are a lot of people shouting about a number of things surrounding these topics, and if you're not shouting the politically correct message too, then no matter how novel and disruptive your idea or invention is, it may not benefit from the "Ideas Boom". If this is affecting you, jump on a plane and go to Silicon Valley, or wherever else may be appropriate.

I keep hearing about the lack of opportunity here in Australia, in many fields, on podcasts I listen to occasionally. On those podcasts, when people ask eminent panel members for their thoughts on the subject, invariably the answer is that they do hope you stay and help drive the next generation. It always surprises me, assuming these persons are in tenured positions, how devoid their responses seem to be of the reality of needing money (or, some may say, capital) and support to do so. It's this latter bit that will take so long to grow here in Australia; it may also require a generational change. The notion in some of not taking risks is the antithesis of what is required in an "ideas boom" era.

So I'm thinking of slowly fading away from observing and commenting on all of this, until I need something concrete from it. Presently it all seems to be a nice discussion; but discussion is, after all, discussion and not tangible outcomes.

Nick’s tips for Silicon Valley

For Silicon Valley you need to find meet-ups/events. That's where people network.

During the day a lot of people are working on site in the large campuses of the tech companies. Traffic is fairly heavy on the main routes in the early morning, with people still heading into work after 9 AM.

The wider Bay Area is really massive. Expect some decent travel times, and I'd suggest getting a hire car on your first trip. I've used the Caltrain to get from San Francisco out to Mountain View. The people at Enterprise Mountain View are really friendly. If it's your first time driving a left-hand-drive car, the roads are also a little less busy than, say, San Francisco, so you can practice on side streets. Remember: driver in the middle!

San Jose is a city that many in Australia would not have heard of. It has among the highest per-capita earnings of any US city, fuelled of course by tech. There's not much to see there, but it's worth at least a visit.

Mountain View, I'd say, is the heart of Silicon Valley; it has many of the well-known tech companies' headquarters.

Places to have a look at or around Mountain View:
– Red Rock Coffee Mountain View (peeps use it as a place to work)
– Hacker Dojo http://www.hackerdojo.com – they have tours on Friday nights etc.; you need to work out what's on via Meetup
– Apple HQ, 1 Infinite Loop, Cupertino
– Drive around Stanford University (really hard to get a park before 4 PM)
– NASA Ames Research Centre – small visitor centre with a shop
– Moffett Field http://www.moffettfieldmuseum.org (get a few selfies in cockpits)
– Google HQ – driving near there you might see a Google self-driving car. There's a shop/visitor centre, but I think it's only open Tue–Thu
– visit the Computer History Museum (well worth the time)
– Facebook HQ (nothing to see besides the campus and the FB like sign out the front)
– Intel HQ has a good museum
– Yahoo, eBay etc all have office spaces but are pretty boring

A little bit off the beaten path is finding the Steve Jobs and Hewlett-Packard garages.

Nick’s tips for San Francisco

OK… be prepared for a mix in San Francisco. You're going to have homeless people all over the place, with rich, affluent tech people alongside – it's a big contrast. A big issue in the city is gentrification. Having said that, there are lots of really cool places, but they might be a little hard to find.

Try the Foursquare app or TripAdvisor. The apps really do work here; they are your friend.

For the touristy-type stuff, I head straight down to http://www.pier39.com/ and look at the sea lions. Then I jump on a one-hour harbour tour; they all go by Alcatraz and under the Golden Gate. This way I really know I'm in San Francisco. The other clue is the waft of dope floating down Market St. Apparently it's legal to smoke for medical purposes in California; you're just not supposed to do it in the open.

It's hard to get onto an Alcatraz tour at this time of year (October as I write this) if you haven't booked in advance. However, some people like Alcatraz, others don't. I've yet to get on the rock, but it still fascinates me when I go by it on a harbour tour.

Pier 39 is next door to Fisherman's Wharf – restaurants etc. Try https://www.boudinbakery.com/ – they are famous for their clam chowder in a bread bowl. I now have breakfast at Boudin on my first day in San Francisco and go for the traditional egg and bacon brekkie.

Another good thing to do is to hire a bike and cycle over the Golden Gate Bridge – it's not hard. Most catch the ferry back from Sausalito (be wary of the time of day, as there are big line-ups to get on the ferry). If it's fogged in, maybe choose another day or wait for it to clear. There are plenty of bike-hire places, around $7 per hour.

If you want to hop on the cable car, I'd avoid doing it at the Powell/Market St turnaround near Union Square – the lines are always long. Try doing it next door to Fisherman's Wharf at http://www.ghirardellisq.com/. One of the first stops is the top of Lombard St; you could walk down it and then catch the cable car again 🙂

San Francisco is the next most densely populated major US city after New York; I think the population itself is about ~750K. The broader Bay Area is massive – Highway 101 is the main route to Silicon Valley. There are heaps of meet-ups and events on during the week, and it's also a bit cheaper to stay there than San Francisco. However, there's not much to do around Silicon Valley (unless you want to work) – it's really suburbia with massive campuses.

To find events use meetup.com or Eventbrite. More than likely you'll find an event that interests you close by (SF and the wider Bay Area). Distances outside of San Francisco are deceptive, so you really need a car, or use, say, the Caltrain/BART and Uber.

Oh, I nearly forgot about the maker movement in SF: http://www.techshop.ws/tssf.html – see if you can get a tour.

There's also Startup House nearby, set up by Australians. It has bunk beds for $25 per night… I think I'm a bit over that, lol, but many people think it's great.

The above is a slightly modified version of an email I sent to a friend explaining what to do in San Francisco. If you have some other tips, or if you're a regular traveller to San Francisco, I'd love to hear about what rituals you now have.

Safe travels and happy exploring.

What are the startup pitches like in San Francisco?

There is a seed funding bubble in the Bay Area they keep telling me. I wanted to get a glimpse of what these startups are pitching. Plus learn a little about what questions investors ask of them.

An opportunity presented itself during my March 2015 US trip to attend a Startup Pitch Day & Mixer event with VCs, angels and entrepreneurs. It was organised by foundersspace.com and hosted at Morrison & Foerster LLP on Market St, San Francisco. After networking a little, I decided to grab a seat – glad I did, as it was standing room only; the venue was at capacity.

The format of the event was a five-minute intro by each founder about their startup, followed by a Q&A session with investors. I've forgotten the exact number that pitched, but there were maybe a dozen or more.

Before that started, each investor in the room was invited to give a brief overview of themselves or their fund, with one or two questions from the audience. I was surprised that a number of mainland Chinese VC funds were in the room. There were a few entrepreneurs, of course, asking leading questions to see if they could entice them to invest in their ventures now. It was all done with good humour.

Each five-minute intro started with a teaser about the business concept or vision that led into the person stating his or her name and company. Further explanation of the concept was then given. No slides or visuals were used – it was just the entrepreneur standing up and talking. A facilitator then invited questions from the investor panel.

This was the part that interested me: what would these investors ask? I had gathered from their introductions that there was a mixture of angels and VCs on the panel, with some of the VCs now favouring smaller funds.

The startups were on the whole consumer-focused, and largely mobile apps. It appeared that the majority were from outside the US; some were looking to take their initial success in home markets and use it as the basis to expand into the US.

So the investors were mainly asking questions about how well the entrepreneurs understood US market fit for their offerings. It was that simple; I suppose with the limited time they had, they couldn't ask much more. I don't think I heard any questions about the team and its ability to execute, nor about what the entrepreneurs were looking for from an investment perspective, to see if there was a match with an investor's investment thesis. In hindsight, I suppose these startups were coming out of an incubator/accelerator, so there would be a range the investors could guess.

These guys and gals had obviously been coached well. Except in a few cases, they confidently answered the investors' questions. Sometimes they restated the questions so that they fitted the answers they had prepared.

I did ask myself: would I invest in any of these startups? Or would I want to find out more information about them first? One would assume that if you approached them, there would have been an opportunity statement they could give out.

It was an eye-opening event for me, as nearly everyone that presented had market traction (somewhere around the globe). But from my perspective, I had trouble valuing what that market traction was worth!