By Dr. Tatiana Malherbe, Dr. James Carswell
The future of the web relies on connected objects (iThings) that will allow us to interact fully with our immediate environment. The ever-growing number of sensors will send so much data over the network that it will need to be sorted to be of any real interest. The development of Dr. Carswell’s Three Dimensional Query (3DQ) algorithm already offers one way of shaping that future for mobile users.
We all know Web 2.0, the social web, based on information shared between users. Facebook, Twitter, and so on: even if we don’t use them, we know them! Nowadays, more and more data around us are available on the network and need to be shaped before they can be used by Web 3.0 and Web 4.0 applications.
Web 4.0, the future of iThings
You probably wonder what this new Web 4.0 is all about. Some people call it the “intelligent web”, where the network itself takes the initiative. This is possible thanks to the development of the Internet of Things (iThings) that can collect and send data through the cloud. All everyday connected objects using an IP address are part of the iThings, like a scale, a fridge or a washing machine.
According to ABI Research, more than 30 billion devices will be wirelessly connected by 2020 and, thus, will be available to send information about position, movement and environment, such as air and water quality, ambient light, noise, radiation, energy consumption, etc. Dr. Carswell did not wait for the complete setup of Web 3.0, the “semantic web”, to work on solving Web 4.0 problems. One of them is information overload on mobile displays, which will only continue to increase over the years. “Optimizing all available information to personal needs through intelligent search and display is at the heart of 3DQ,” he explains.
3DQ: Sorting the tons of Web 4.0 data
Nowadays, 2D queries are already used by the general public in various applications like the “what’s around” of Google Maps, Twitter, AroundMe, etc. But, as the technology evolves and the number of sensors around us increases, we will be able to look in a third dimension. Applications like 3DQ will sharpen our requests: not only will we be aware of what’s happening around us, but we will know precisely where to look, knowing even which floor to go to before entering a building.
For this, 3DQ uses the latest open source technologies to collect data inside a virtual dome that is created all around the mobile user. “Non-spatial attribute information is linked to geo-referenced buildings, points-of-interest (POIs), windows, doors and any sensors, iThings, that may be attached to these objects.” After identification of the physical objects, their typical Internet information is retrieved and their links are presented to the user.
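As a rough illustration of the idea (not Dr. Carswell’s actual algorithm, which computes the true visibility shape rather than a simple hemisphere), a minimal “dome” query can be sketched as a radius filter over geo-referenced sensor positions; all names and coordinates below are invented:

```python
import math

def dome_query(user, radius, sensors):
    """Return ids of sensors inside a hemispherical 'dome' of the given
    radius centred on the user's (x, y, z) position.  A hypothetical
    stand-in for 3DQ's visibility-shaped query window."""
    ux, uy, uz = user
    hits = []
    for sid, (x, y, z) in sensors.items():
        dx, dy, dz = x - ux, y - uy, z - uz
        # keep only points at or above the user that fall within the radius
        if dz >= 0 and math.sqrt(dx * dx + dy * dy + dz * dz) <= radius:
            hits.append(sid)
    return hits

# Hypothetical geo-referenced iThings: id -> (x, y, z) in metres
sensors = {
    "air_quality_1": (10.0, 5.0, 2.0),
    "noise_3":       (90.0, 0.0, 30.0),   # outside the 50 m dome
    "window_12":     (3.0, -4.0, 12.0),
}
print(dome_query((0.0, 0.0, 0.0), 50.0, sensors))
# -> ['air_quality_1', 'window_12']
```

In the real system, the dome is clipped by line-of-sight obstructions before the filter is applied, so hidden objects are excluded even when they are within range.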
What is the use of 3DQ in our daily lives?
Of course, a 2D query could be sufficient to plan a jogging route or cycling trip. But combining the browser-friendly web service delivered by 3DQ with real-time access to many layers of Big Data could allow an ideal route to be worked out in terms of air pollution, noise and green space conditions.
“The future is for 3DQ to function on the mobile device itself, without need for network connections – except for retrieving linked attribute data,” says Dr. Carswell. This approach could avoid network latency in the query process, with the additional benefit of reducing network data transmission costs.
Future work will combine this approach with “semantic based filtering mechanisms of Web 3.0 and map personalization techniques that monitor past user task and map behavior to further adapt maps and information display,” added Dr. Carswell. Now, imagine the power of such an algorithm coupled with new technology, like Google Glass: sci-fi movies are becoming real…
Q/A – Dr. Tatiana Malherbe / Dr. James Carswell
Q. You talk about tomorrow’s “Web 4.0”. Do you think that, with the rapid evolution of various technologies, we will go directly from 2.0 to 4.0 without any real passage through 3.0?
A. Not exactly. Web 3.0 is about building a “semantic web” where common data formats are employed to describe all data on the web, making it machine readable and thus enabling the sharing of diverse data streams across multiple applications. It is a good idea that is proving difficult to implement automatically, so it remains an open problem that many research groups are actively investigating. Happily, it is not necessary to wait for Web 3.0 before we can start working on solving Web 4.0 “problems”.
Q. Is the 2D query already commonly used? In what kind of applications? For the general public or a more specialized one (like the military, commercial companies…)?
A. Yes, 2D querying is already commonly used and open to the general public, especially the range (proximity) query. It can be seen in many services and applications, for example Google Maps (what’s around), GeoNames, Twitter, AroundMe, Layar, Wikitude, GeoVector World Surfer, etc., with some applications also including directional query capabilities. However, 2D IsoVist querying is not yet commercially available anywhere.
Q. Do you know of other kinds of 3DQ development? If yes, is it the same kind of approach as yours? What is the biggest difference between them?
A. There are some related approaches in terms of calculating visibility in urban environments, for example the Spatial Openness Index (SOI) by Fisher-Gewirtzman et al., or the 3D Isovist Voxel by Morello and Ratti. These approaches measure visibility as a scalar, i.e. they are more concerned with calculating the volume (m³) of the visibility space in order to measure the influence of different urban environment (e.g. building) configurations on human perception. Our approach is based on vector datasets and requires calculating the actual shape of the volume to act as a 3D query “window” in a spatial database. To do this, we need to find and connect all possible line-of-sight intersection vertices comprising the unique visibility space surrounding an individual’s location.
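A 2D analogue of this vertex-finding idea is the classic isovist computation: cast rays from the observer and keep the nearest wall intersection in each direction. The sketch below is a simplification of the 3D vector approach described above, and all inputs are invented:

```python
import math

def ray_segment_hit(origin, angle, seg):
    """Distance along the ray (ox, oy) + t*(cos a, sin a) to the wall
    segment ((x1, y1), (x2, y2)), or None if there is no intersection."""
    ox, oy = origin
    dx, dy = math.cos(angle), math.sin(angle)
    (x1, y1), (x2, y2) = seg
    ex, ey = x2 - x1, y2 - y1
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:                # ray parallel to segment
        return None
    t = ((x1 - ox) * ey - (y1 - oy) * ex) / denom   # distance along ray
    u = ((x1 - ox) * dy - (y1 - oy) * dx) / denom   # position on segment
    return t if t > 0 and 0 <= u <= 1 else None

def isovist(origin, walls, max_range, n_rays=360):
    """Approximate the 2D visibility polygon: for each ray keep the
    nearest wall hit, or max_range in open directions."""
    pts = []
    for i in range(n_rays):
        a = 2 * math.pi * i / n_rays
        hits = [d for seg in walls
                if (d := ray_segment_hit(origin, a, seg)) is not None]
        d = min(min(hits, default=max_range), max_range)
        pts.append((origin[0] + d * math.cos(a), origin[1] + d * math.sin(a)))
    return pts

# One invented wall 5 m east of the observer blocks that direction
walls = [((5.0, -10.0), (5.0, 10.0))]
polygon = isovist((0.0, 0.0), walls, max_range=100.0, n_rays=360)
```

The real 3DQ computation works on 3D building models and connects the resulting intersection vertices into a closed volume, but the ray/obstruction intersection test at its core is the same kind of geometry.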
Q. In your conclusion, you open the door to a mobile solution instead of a server one. Are you going to look further in this direction? If yes, what will be the major changes for you and the users? Would we still talk about “web 4.0” if a mobile solution is chosen?
A. Yes, the future is for 3DQ to function on the mobile device itself, without need for network connections – except
for retrieving linked attribute data. Even on today’s mobile device hardware, a variety of server functionalities can
already be directly implemented, e.g. 2D query and display functionality on pre-downloaded maps is clearly
possible. Network latency (not query processing time) is the main hold-up to the query process so migrating the
entire query process to mobile devices will definitely help solve this problem – plus the added benefit of decreasing
any network data transmission costs. However, an all-mobile solution does not simply shift everything
implemented on the server to the mobile device. The computational power is now mobile but Web 4.0 sensors
(iThings) will still need servers in the cloud to host and serve “Big Data” for many LBS applications.
Q. You talk about trillions of inexpensive micro-sensors in the environment in the future. How many (approximately, of course!) are already present nowadays?
A. No idea! But, consider Dublin City as a “small” example. The city currently manages a live Big Data archive derived from various local and national government departments [http://dublinked.ie/datastore/datastore.php]. This vast and largely underexploited resource currently includes 266+ layers (themes) of heterogeneous data streams – from cultural heritage to citizen participation to environment to transportation, etc. Contextually selected, integrated, and spatially visualised one layer at a time, two at a time, etc., there is a practically infinite number (2²⁶⁶ – 1) of potential combinations of new layers of information to create, overlay, recommend, and explore with mobile 3DQ technology. Such “open data initiatives” are becoming more common, with some cities, e.g. Toronto, Vancouver, Ottawa, Edmonton [www.toronto.ca/open/], and indeed countries, e.g. the U.K. [http://data.gov.uk/] and the U.S.A. [www.data.gov/catalog/], now compiling and opening data catalogues of heterogeneous spatial and non-spatial datasets to citizens for royalty-free use world-wide.
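The layer-combination count quoted above is simply the number of non-empty subsets of 266 layers. A few lines of Python verify the formula on a toy example before applying it to the full archive (the layer names here are invented):

```python
from itertools import combinations

# Any non-empty subset of data layers can be overlaid into a new
# composite layer; for n layers there are 2**n - 1 such subsets.
layers = ["air_quality", "noise", "green_space"]
subsets = [c for r in range(1, len(layers) + 1)
           for c in combinations(layers, r)]
print(len(subsets))     # 2**3 - 1 = 7 non-empty overlays

# For Dublin's 266-layer archive the count is astronomical:
print(2**266 - 1)       # an 81-digit number, roughly 1.2e80
```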
Q. Could you tell me if I have understood, or correct me if I am wrong: Today, there are already a lot of iThings that can be used for various 2D queries but in the future, there will be many, many more. To have more accurate
information about what’s around you, you are developing a 3D query using all the signals of these trillions of
iThings. You developed and tested 3 algorithms on the DIT campus and added some sensors on your facades. Just
to be sure that I have understood correctly, in “reality”, your algorithm will use preexisting iThings? No additional
sensors will be needed?
A. Correct. Most datasets are inherently spatial – containing locational information as well as attribute information – and there is already more data available today – coming from embedded traffic sensors, weather sensors, marine
sensors, pollution sensors, cellular and social media tracking, etc. – than we can handle in any one application, i.e.
information overload. To help address this problem, our 3DQ application acts as a mobile sensor web data mining
tool for intelligently refining the search space to reduce information overload and increase task-relevant data
retrieval of geospatial and sensor web datasets already available today with a view to easily include Future
Internet data streams as they come online.
Q. The Dome: Could you explain the “hole” (in the picture) in the front of your dome? Does it correspond to your
back? With the big building, it looks like you can’t see past it, am I right? If yes, do you think that it could be a
limitation (to have access to what happens on the street behind it or even inside the building) or do you think that
people won’t care, otherwise they would have been on the other side of the building…?
A. The dome is created for a 360° swath around the user’s current location out to a user-specified radius. S/he could
be facing any direction and will get the same dome shape calculated. It all depends on what spatial objects (e.g.
buildings) interrupt their field-of-view in all directions from where they are standing – and as they cannot see
through buildings, objects beyond what they can actually see do not get returned by the query. In terms of
retrieving information on what lies in the space beyond what they can see, the results of a dome query could be
subtracted from a simple range query to retrieve only those objects that are just out of sight. For now, we do not
search inside of buildings. However, 3DQ can distinguish between different floors or exterior windows/doors of a
building and retrieve information related to these individual objects, if it’s available in the database.
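The “subtract the dome from a simple range query” idea described in this answer reduces to plain set difference over the two result sets; the identifiers below are invented for illustration:

```python
# Hypothetical result sets from the two query types at the same
# location and radius:
range_hits = {"cafe_door", "bus_stop", "sensor_7", "atm_behind_wall"}
dome_hits = {"cafe_door", "bus_stop", "sensor_7"}   # the visible subset

# Objects in range but not in the dome are "just out of sight"
just_out_of_sight = range_hits - dome_hits
print(just_out_of_sight)    # {'atm_behind_wall'}
```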
Q. Do you use the same sensors as Google Maps for your 3DQ? Do developers use their own kinds of sensors?
A. The type of sensors used is not important, only their location. In fact, the “sensors” we used in our paper were
simulated – i.e. they don’t actually exist except in the spatial database. The purpose of placing these sensors
on/around the buildings (stored as 3D models in the database) was to test the accuracy of the query algorithms
developed. Some sensors were intentionally placed in hidden nooks and crevices of the building (model) to make
for a fair test when calculating the dome shape used to search for secreted sensor locations.
Q. If your algorithm works with both today’s sensor web environments and tomorrow’s Web 4.0, what will the
passage to Web 4.0 bring?
A. The passage to Web 4.0 will bring even more data streams online than are currently available for query. Thus,
information overload on mobile displays is already a problem that will only continue to increase in the years to
come. Optimizing all available information to personal needs through intelligent search and display is at the heart
of 3DQ. Combining our approach with semantic-based filtering mechanisms designed to exploit ontologies (Web 3.0) and map personalisation techniques that monitor past user task/map behaviour will also be looked at in our future work, to increase query relevance, recommend next levels of detail, and further personalise maps and information display.
Q. The same algorithm will be available in one app to bring data about what’s happening behind the door in front of you and to know the air quality when you are jogging in the park? Or will it be used by different companies: for
example the University for the organization and the city for the air quality?
A. Yes, the same 3DQ search algorithms can be used for all types of queries and applications.
Q. I understand why 3D is important for areas with big buildings, but for bikers or joggers and air quality, is 2DQ sufficient, or am I missing something?
A. Air quality sensors at ground level, next to traffic, will probably give a completely different reading than those at rooftop level, so 3D queries might be needed to distinguish between the two. However, bikers and joggers are probably more interested in the cleanest, quietest, safest route over a large area instead of around a specific building. So, 2D queries could be sufficient to provide the information they need for planning a jogging or cycling trip. In any case, real-time access to layers of Big Data draped over the underlying spatial fabric of the city will facilitate informed, context-aware decisions about such personal/task-specific problems. For example, a jogging application could overlay air pollution data with noise data and green space data to recommend the ideal route.
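In the simplest case, the overlay-and-recommend step described here reduces to a weighted score per candidate route. The data, weights, and route names below are purely illustrative, not from the paper:

```python
# Hypothetical per-route averages from three data layers:
# PM2.5 (µg/m³) and noise (dB) are better when low; green space (%) when high.
routes = {
    "canal_path":  {"pm25": 8.0,  "noise_db": 55.0, "green_pct": 60.0},
    "main_street": {"pm25": 22.0, "noise_db": 72.0, "green_pct": 10.0},
}

def route_score(r, w_air=1.0, w_noise=1.0, w_green=1.0):
    """Higher is better; the weights would come from user preferences."""
    return (w_green * r["green_pct"]
            - w_air * r["pm25"]
            - w_noise * r["noise_db"])

best = max(routes, key=lambda name: route_score(routes[name]))
print(best)    # canal_path
```

A real recommender would score many candidate routes against live sensor layers, but the principle of combining weighted layers into one decision is the same.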