Written 2nd July 2018
Recently I attended the 2nd day of CognitionX 2018, the AI, blockchain and autonomy conference. I love technology, and I particularly enjoy it when it improves people’s lives in some way. I came away both enthused about how far we’ve come and worried about how far we still have to go to make these technologies a reality.
- These technologies are a lot more developed than even I thought they were (let alone than the average UK citizen realises), and it was great to see the advances being made and the just plain “out there” ideas people are having and applying them to.
- AI isn’t really Artificial Intelligence at the moment, it’s simulated intelligence: an approximation of intelligent decisions based on learned behaviour, usually via machine learning.
- A point raised many times over the day was the need for an inclusive data set when training that behaviour. For example, if you train an AI to recruit new engineers based on the current profile of most companies, you’d get a lot of white males in the 25–35 age range. We need to train for aspiration as well as reality.
- Smart cities have had slow to no impact on people’s lives so far. Do the public really want a smart city, or just a better one? What’s the difference?
- There was a lot of talk about London’s smart city plan (launched the previous day), but no one mentioned Nesta’s Flying High Challenge. There is a risk that, once again, many teams are off working on the same thing without talking to each other (see my 3rd takeaway).
- If autonomous cars become prevalent, why would you own a car rather than use the airline model of renting a seat in one when you need it? Costs would come down dramatically, not to mention the reduced environmental impact of fewer vehicles being produced, and the city space that would be reclaimed as fewer of them fill the roads.
- Is there a free flow of information between organisations? I saw a couple of sessions where I thought “if you just spoke to that company I saw on the other stage, you could solve your problem”. If not, how do we foster that?
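The training-data point above can be shown with a toy sketch (my own illustration, not from any talk, with made-up numbers): a “recruiting model” that simply learns the majority profile from historical hires will reproduce the historical skew rather than any aspirational target.

```python
# Toy illustration with hypothetical data: a model that learns only
# from past hires can only ever recommend more of the same.
from collections import Counter

# Assumed historical hiring data, skewed as described above.
past_hires = (
    [("white", "male", "25-35")] * 80
    + [("white", "female", "25-35")] * 10
    + [("asian", "male", "36-50")] * 10
)

def majority_profile(data):
    """Predict the 'ideal candidate' as the most common past profile."""
    return Counter(data).most_common(1)[0][0]

print(majority_profile(past_hires))  # -> ('white', 'male', '25-35')
```

However sophisticated the real model, the principle holds: if the only signal is yesterday’s workforce, the output is yesterday’s workforce.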
With my drones hat on, some of the interesting topics were around mapping and automation.
- Mapability have a great solution for crowdsourcing mapping data. This is great, but what happens when you need data for an area they don’t have? They’ve already thought of that and are creating a marketplace for you to task someone (or something) to acquire it.
- There are major advances in autonomous vehicles that are nearing readiness for adoption, e.g. Oxbotica learning to self-position (infrastructure-free, vision-based localisation) without the need for GPS (or other global satellite systems) using only a cheap camera, with all the processing done in software. Their software can remove shadows from images so that time of day and season don’t matter, and can remove or simulate weather conditions to allow for better training and recognition.
- However, what dataset is needed? How many times must the route be driven? How scalable is the solution?
- HRobotics have produced a drone you don’t have to be trained to fly, thanks to its level of automation, and one robust enough to deal with most hot, hostile and austere environments. It truly is just another tool in your company’s toolbox. I’m really excited to see how they address the safety case to fly it in the UK given our PfCO rules.
- Animal Dynamics are taking inspiration from nature to solve some of the common problems of drones around flight times and payload delivery. Drones based on the glide characteristics of birds and the flight of a dragonfly were just two examples.
- Not to mention Sabine Hauert, who is analysing flocks of birds and applying the findings to drone swarms.
Other topics of interest were:
- Safety and accountability in machine intelligence: fostering trust in learned systems is key to public acceptance. For this to happen, the decisions they make need to be open, legible and understandable to the users of the service. Not to mention that the beneficiary of the technology needs to be identified, so that users know who will benefit from (or be harmed by) the service, e.g. Facebook and the Cambridge Analytica scandal.
- Comparing humans and machines: the human brain is vastly more capable at computation than a computer, but has a much lower storage density. As machine designers are not constrained by the rules or template of the human body, what improvements can they make to its form and function? E.g. the Boston Dynamics robots.
So it seems like we’ve made huge advances in machine learning, but it still has a long way to go before those systems will be true AIs, and, critically, before the public will accept them. Smart cities are starting to be designed, but unless we plan for them as an integral part of our future they won’t deliver the benefits the public have been promised (and are those promises even what the public actually want?). Autonomous vehicles need a compelling use case, so that all the work needed to overcome the barriers and regulation is worth the benefits.