Why ‘Internet of Things’ will fuel next tech revolution
AI and 'deep learning' have opened up a new 'ethical frontier'
As part of the University of Bradford's '13 Academics' series of features, this month we talk to Professor Dhaval Thakker about how advances in artificial intelligence and the 'internet of things' will change our lives.
The world is about to get a whole lot smarter. Smartphones may now be ubiquitous but 5G and ‘the internet of things’ will usher in a new age of ‘smart’ objects. Think smart cups, smart bins, smart fridges, smart newspapers, cupboards, windows, chairs, hats, clothes and practically anything else as we tumble headlong into a world that blends the achingly mundane with the truly fantastic.
“The internet of things has been around for 20 or 30 years,” muses Dhaval Thakker, Associate Professor in Computer Science in the Faculty of Engineering & Informatics at the University of Bradford and director of two of its Computer Science MSc courses: the MSc in the Internet of Things (IoT) and the MSc in Big Data Science and Technology. “It was previously called ‘machine to machine (M2M)’ but now there’s so much more we can do with it.”
He is talking about how the next iteration of mobile connectivity (essentially the jump from 4G to 5G) will be nothing short of transformative, changing not only the way we communicate with one another but dragging previously technologically inert ‘things’ (fridges, cars, cameras, even buildings and entire cities) into the ‘5.0’ world, where both they and we benefit from a series of connections, sometimes to each other and increasingly to parallel networks.
Cricket in 40°
It’s a far cry from the world he grew up in. Born in India, he lived in a small town in Gujarat, where he remembers “playing cricket in 30-40 degree heat”. He was a good student and had the grades to go to medical school (his parents’ preference) but rebelled and chose to follow his passion for AI. He came to the UK in 2003 to do a Master's in Communication Systems at Brunel University London. He won a scholarship to do a PhD in computer science at Nottingham Trent University in 2008 for his work on 'An intelligent framework for dynamic web services composition in the semantic web'. Prior to joining Bradford, he worked as a Senior Research Fellow at the University of Leeds from 2011 to 2015, where he led semantic web and AI-related research in several EU projects. He has lived in Leeds with his wife and two children since 2011.
Reflecting on his youth, he says: “Computers were a rare thing. My father had one and it was new and expensive. I was always fascinated by it but never had much access to it. As I grew older, IT became a big thing. I had the grades to go to medical school but I didn’t want to be a doctor.”
In the end, he followed his heart and is now part of the very technological revolution which inspired him as a child, and which has never been more powerful.
He explains: “5G connectivity is going to be at least 10 times faster than 4G, meaning much lower latency – below 10 milliseconds. This will solve many of the issues where latency has been a challenge. It also opens up new avenues in terms of applications. For example, it can allow the feeling of being in the same room as someone else, so two people at different locations could sing a duet. This will have a massive impact on other applications where this sort of low latency is paramount, such as complex surgeries performed remotely.
“In terms of impact on the Internet of Things, the low latency and ubiquitous nature of 5G mean it will support smart vehicles and transport infrastructure, such as connected cars, trucks and buses, where a split-second delay could be catastrophic, together with other smart objects.”
Just what these everyday objects will do in order to be elevated to ‘smart’ status is, as yet, unclear. Smart fridges might automatically reorder food, smart bins will know when they are full, smart newspapers may save the Press from oblivion. What is clear, however, is that such change is coming. Indeed, some may argue it’s already arrived.
Some public bins in the centre of Bradford are already fitted with QR codes so people can report issues. An interconnected web of sensors is being used to monitor rain gauges, reservoir and river water levels in Bradford as part of local flood defence planning. The EU-funded SCORE (Smart Cities + Open Data Re-use) project, a partnership between nine cities, including Bradford, is liaising with the Born in Bradford community health project, Breathe Better Bradford and the Open Data Institute Leeds to use an IoT approach to monitoring air quality, while at the same time getting school children involved in data collection.
In addition, a project to create a Virtual Bradford – an online replica of the city centre – is already underway, with 100k of the city mapped in ‘brick for brick’ 3D.
5G and the Internet of Things will enable artificial intelligence (AI) to move into worlds we never thought possible. For example, IoT sensors can monitor flooding in real time, while AI techniques applied to that sensor data can pick up early warning signs before a flood occurs.
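The idea of raising an alert from real-time sensor readings can be illustrated with a minimal sketch. This is not the system used in Bradford's flood-defence work; the function name, window size and threshold here are all invented for the example, and a real deployment would use far more sophisticated AI techniques than a simple rate-of-rise check.

```python
# Illustrative sketch only: flag a possible flood when a river-level
# sensor shows a rapid rise. All names and thresholds are invented.
from collections import deque

def rising_fast(readings, window=6, rise_threshold=0.5):
    """Return True if the level rose more than `rise_threshold`
    metres across the most recent `window` readings."""
    if len(readings) < window:
        return False          # not enough data to judge yet
    recent = list(readings)[-window:]
    return recent[-1] - recent[0] > rise_threshold

# Rolling buffer of the last 24 readings from one (hypothetical) gauge.
levels = deque(maxlen=24)
for reading in [1.0, 1.05, 1.1, 1.3, 1.5, 1.7]:   # metres
    levels.append(reading)

print(rising_fast(levels))    # a 0.7 m rise over 6 readings trips the alert
```

Even this toy version shows why low-latency connectivity matters: the earlier each reading arrives, the sooner the rise is detected.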
AI will have an impact on traditional industries too. “Take the legal profession,” says Dhaval. “It’s a very traditional area and one not used to making use of a lot of technology. A lot of menial work is still done manually. We are trying to develop an AI system which will converse with clients and do a lot of the leg work in terms of checks and finding similarities with other cases, which can then be checked by a lawyer.
“We have just been awarded a £175,000 Innovate UK Knowledge Transfer Partnership (KTP) grant for an ambitious project to investigate and build a Legal Immigration Artificial IntelLigence Advice (LILA) expert system. Development of LILA is focused on building an AI-based expert system to support decision making in UK immigration law. It will be developed using state-of-the-art AI techniques to capture the tacit knowledge held in legal texts and past cases, supporting efficient decision making.
“Some of the mundane (but vital) tasks, such as initial due diligence or assessing the suitability of an application for a certain category of immigration, including initial advice to an applicant, would be among the first targets for semi-to-full automation.”
Here’s where things get a little more complicated, because as AI extends its reach into our lives, previously unlooked-for ethical considerations need addressing.
Dhaval says: “Machine learning or ‘deep learning’ is used to solve complex problems often involving imaging. For example, building predictive algorithms to recognize tumours more accurately and faster than human doctors or creating autonomous vehicles using image recognition to detect road signs, traffic signals and pedestrians.
“This brings questions around the privacy and unintended use of data. Cameras can capture people on the street, and the faces of citizens can be captured by many reporting applications – our own mobile app for flood reporting raised the same sort of ethical dilemmas. What are the implications for a free society if we are under constant public surveillance? How does that change expectations of privacy? What happens if that data is hacked?”
Questions which will no doubt be answered in time.
Linked to the evolution of ‘smart everything’ is something called ‘knowledge graphs’. These are, essentially, a different way of storing data. Instead of information being listed in database tables, it is stored as a graph of entities and the relationships between them, which makes it far easier to search for connections. Google already uses knowledge graphs in its search engine. That’s why, when you search for ‘Apple’, you are presented with an array of information about that company, including its founder, CEO and so on.
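The difference is easiest to see in miniature. The sketch below stores facts as (subject, relation, object) triples – the basic shape of a knowledge graph – rather than as rows in a table; the entities and relation names are invented for the example and are not drawn from Google's actual Knowledge Graph.

```python
# A hypothetical miniature knowledge graph: facts as
# (subject, relation, object) triples instead of table rows.
triples = {
    ("Apple", "founded_by", "Steve Jobs"),
    ("Apple", "has_ceo", "Tim Cook"),
    ("Steve Jobs", "co_founded", "Pixar"),
}

def related(entity):
    """Return every fact directly connected to an entity."""
    return sorted((s, r, o) for s, r, o in triples
                  if s == entity or o == entity)

for fact in related("Apple"):
    print(fact)   # everything the graph knows about 'Apple'
```

Because each fact already names its relationship, a search for ‘Apple’ can surface the founder and CEO directly, with no need to join tables together first.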
In turn, knowledge graphs have given rise to the concept of ‘Explainable AI’. If knowledge graphs represent the ‘What?’, then Explainable AI represents the ‘How?’ and the ‘Why?’. It allows human users to understand, appropriately trust, and effectively manage AI.
Dhaval explains: “‘Explainable AI’ is where we use knowledge graphs to explain decisions made by machines. So, if a doctor sees a certain classification of an image by a machine learning algorithm as containing tumour, we can then use the knowledge graphs to find out why and how the algorithm arrived at that decision.”
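In the same spirit, a rationale for a machine-made decision can be assembled from facts held in a graph. The sketch below is a deliberately simplified illustration of that idea, not Dhaval's actual method: the medical-imaging facts and labels are entirely invented, and real explainable-AI systems trace far richer structures than this lookup.

```python
# Hedged sketch: attaching a human-readable rationale to a predicted
# label using graph facts. All content here is invented for illustration.
supporting_facts = {
    "tumour": [
        ("region", "has_irregular_border", "true"),
        ("region", "density_above", "threshold"),
    ],
}

def explain(label):
    """List the stored graph facts that support a predicted label."""
    return [f"{s} {r} {o}" for s, r, o in supporting_facts.get(label, [])]

for reason in explain("tumour"):
    print(reason)   # the 'why' behind the classification
```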
Knowing the why and how could lead to quicker diagnosis and more effective treatments. It could also help implement safeguards to prevent misuse of AI.
“Deep learning is a reality in today’s world but in terms of its application and the ethical issues which come with that, we haven’t even scratched the surface.”