Let's Visualize the World with AI and GIS
GIS specialist Sanna Jokela from Mapillary's partner Gispo was invited to give a talk on AI and GIS. Here is what she found while exploring the unfamiliar topic of artificial intelligence. The post originally appeared on the Gispo blog.
I took part in an AI and urban design event in Turku, Finland, in February, and was honored with an invitation to talk about how to visualize maps using AI. For a moment, I thought I had nothing to contribute on the subject. Artificial intelligence sounds scary, futuristic, and technical, and I don't know anything about it. It turned out that I did, actually, and there is lots of discussion going on. But related to urban planning? Not so much.
When talking about AI, the first thing people think about is the ethical issues. What happens when we start using new methods in urban planning: AI, automated image detection, virtual reality (VR), and improved data mining? Whose algorithms are we using, and whose view counts more than the others'? If we teach AI to create city plans, trying to please everyone and ending up with compromises, is the end result what society actually needs to move forward? We are not quite there yet, but this kind of AI-assisted urban planning is around the corner.
My original talk was about maps and visualization. Current open tools and open datasets linked to location can produce amazing map visualizations to support knowledge-based urban management, and can also offer views into the "soul maps" of citizens (depending on how openly you use social media).
I used examples from my current favorite visualization artists in Finland: the results of the SPIN Unit research group and the personal blog of Topi Tjukanov. Both use QGIS, among other tools, for data visualization. And both examples rely on people making visualizations of data created by people, not machines.
Visualization examples by Topi Tjukanov: "Accessibility fireworks"
In these examples, the visualization relies on the human eye: a human judges the beauty of the chosen color ramps, classifications, and symbols. What if AI could choose the most suitable visualization for us?
Could AI provide us with ready-made maps?
My colleague at Gispo, Salla Multimäki, gave me a hint about a fun AI art tool called DeepDreamGenerator. The site uses Google's open-source DeepDream code, where one image provides the style reference and another determines how the objects are placed.
What if we provide the algorithm with some aerial or satellite images and a guide map for reference? Of course, we had to test! Pretty awesome! The map maker inside me got quite excited. Here are more DeepDream tests with satellite images by Salla.
Tests with visualization of satellite images with DeepDreamGenerator by Salla Multimäki, visualization expert at Gispo
What if we could teach AI what constitutes a "beautiful map"? Would it automatically start providing us with basic maps, background maps, or guide maps, without the endless and sometimes tedious work of classifying and styling vector data? To achieve this, the algorithm would have to be optimized to do just that.
We still need to teach the machines, and that can only be done by telling the algorithm what is right and what is wrong. Mapillary is doing just this to improve their image recognition algorithm. They can now detect structures in images such as traffic signs, buildings, and vegetation, though more work is still needed in different environments. The work relies on human verification, which teaches the algorithm to detect objects more accurately.
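The human-verification loop described above can be sketched in miniature. This is a toy illustration only, not Mapillary's actual pipeline: a simple perceptron guesses whether an image patch contains a traffic sign, a "human verifier" confirms or corrects each guess, and the corrected labels go back into training. The features and data are invented for the example.

```python
# Toy sketch of a human-in-the-loop labeling cycle (hypothetical data and
# model -- NOT Mapillary's real system). A perceptron predicts whether an
# image patch shows a traffic sign; human-verified labels retrain it.

def predict(weights, features):
    """Return 1 if the weighted sum is positive, else 0."""
    return 1 if sum(w * f for w, f in zip(weights, features)) > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Standard perceptron training on (features, label) pairs."""
    weights = [0.0] * len(samples[0][0])
    for _ in range(epochs):
        for features, label in samples:
            error = label - predict(weights, features)
            if error:
                weights = [w + lr * error * f
                           for w, f in zip(weights, features)]
    return weights

# Hypothetical features per patch: (redness, circularity, bias term)
patches = [((0.9, 0.8, 1.0), 1),   # a red, round traffic sign
           ((0.1, 0.2, 1.0), 0),   # vegetation
           ((0.8, 0.9, 1.0), 1),
           ((0.2, 0.1, 1.0), 0)]

# Round 1: train on a small seed set only.
seed = patches[:2]
weights = train(seed)

# A human verifier reviews the model's guesses on new patches and
# records the correct label for each one.
verified = []
for features, true_label in patches[2:]:
    guess = predict(weights, features)   # the machine's proposal
    verified.append((features, true_label))  # the human's verdict

# Round 2: retrain on seed + human-verified labels.
weights = train(seed + verified)
print(all(predict(weights, f) == y for f, y in patches))  # → True
```

Each pass through the loop gives the model more verified examples, which is the essence of the teaching process: the machine proposes, the human disposes.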
I have done aerial image detection myself (waaay back), and at that time reflections and varying pixel values between images made it quite difficult to detect patterns automatically. By uniting different data sources, image detection methods, and AI, we can probably identify structures for urban planning more easily.
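One common workaround for those varying pixel values is to standardize each image before comparing them, so that differences in lighting or sensor calibration do not swamp the actual patterns. Here is a minimal sketch with made-up pixel values; real aerial imagery would of course use libraries like NumPy or rasterio rather than plain lists.

```python
# Minimal sketch of per-image standardization (z-scores). The pixel
# values are invented for illustration; real workflows would use
# numpy/rasterio on full raster bands.

def zscore(pixels):
    """Standardize one image band to zero mean and unit variance, so
    images captured under different conditions become comparable."""
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    std = var ** 0.5 or 1.0  # guard against a flat image
    return [(p - mean) / std for p in pixels]

# Two "images" of the same field, captured under different lighting:
bright = [200, 210, 190, 205]
dark = [100, 110, 90, 105]

# After standardization both show the same relative structure, so a
# pattern detector sees them as equivalent.
print(zscore(bright) == zscore(dark))  # → True
```

The absolute brightness differs by a constant offset here, so standardization removes it entirely; real imagery is messier, but the principle is the same.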
From an urban planner's viewpoint, the issue is interesting. We have a project with general planners in which we are designing a database model for their future use, and it has led us to think more and more about the possibilities of AI. We asked the general planners how they define, for example, what a zoning element on a general plan is. How do they separate a certain area from another? The answer was that zoning elements are logical entities that are of a suitable size for zoning and regionally coherent. In short: "We just know."
The zoning elements are also linked to each other, so that if a certain element is not approved (say, on administrative-law grounds), it affects the surrounding zoning elements or the whole plan. How can we teach machines this local knowledge (that we "just know") and make them see the connections between different spatial elements? AI should also take into account the political views and history of the area (do we even have all historical plans digitized?). It is very interesting, and I think AI solutions in urban planning will come, but they will require lots of teaching, testing, and new ways of thinking.
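The dependency idea above can be sketched as a graph problem. In this toy example (the element names and adjacencies are entirely hypothetical), zoning elements are graph nodes, neighboring elements are edges, and rejecting one element flags the elements around it for re-examination:

```python
# Toy sketch: zoning elements as a graph, where rejecting one element
# flags its neighbours for review. All names and adjacencies are
# hypothetical -- a real plan database would derive them from geometry.

from collections import deque

# Hypothetical adjacency between zoning elements on a general plan.
neighbours = {
    "residential_A": ["park_B", "road_C"],
    "park_B": ["residential_A"],
    "road_C": ["residential_A", "industrial_D"],
    "industrial_D": ["road_C"],
}

def affected_by(rejected, graph, depth=1):
    """Return elements within `depth` steps of a rejected element --
    the ones a planner (or an AI) should re-examine."""
    seen = {rejected}
    frontier = deque([(rejected, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            continue
        for n in graph[node]:
            if n not in seen:
                seen.add(n)
                frontier.append((n, d + 1))
    seen.discard(rejected)
    return sorted(seen)

# If the road element is rejected, its direct neighbours need review:
print(affected_by("road_C", neighbours))  # → ['industrial_D', 'residential_A']
```

Encoding the adjacencies is the easy part; the hard part, as the planners' "we just know" answer shows, is capturing *why* two elements belong together in the first place.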
If you have any ideas on this, find your local AI enthusiasts and start a discussion. In Finland, we have an AI Society that is driving the discussion. I myself got quite intrigued. I hope to learn more about this!
Some interesting links: