By Andreea Andrei, Marketing and Business Administration Executive at The Cloud Computing, SaaS and Security Awards
This article is part of an A to Z series by Cloud and SaaS Awards, continuing with N for Network Intelligence
Whether we want it or not, we disclose information about ourselves when we use the internet, phones, and “smart” TVs. Whether we use social media or not, we constantly leave traces of our activities on the internet. This knowledge has now become a valuable commercial commodity. It will continue to expand rapidly as more devices join the so-called “Internet of Things” (IoT). Our online movements are a source of information not only for ourselves, but also for companies that provide us with services on the network.
Companies collect data on the internet and store it in massive databases, which have been dubbed “big data.”
- Google handles 3.5 billion queries every day and stores around 10 exabytes of data. It has nearly a million servers.
- Facebook adds 500 terabytes of data per day.
- Amazon stores the purchase data of 152 million consumers in around 1 exabyte.

Personal data collectors on the internet are not all the same, nor are their purposes.
Wikipedia provides an excellent definition of an algorithm:
An algorithm (from the Latin algorithmus, in turn derived from the name of the Persian mathematician Al-Khwarizmi) is a prescribed set of well-defined, ordered, and finite instructions or rules that allow an activity to be carried out through successive steps that raise no doubts in the mind of whoever performs them. Given an initial state and an input, following those steps leads to a final state and a solution.
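The definition above can be illustrated with a classic example, Euclid's algorithm for the greatest common divisor: a finite, ordered list of unambiguous steps that turns an input into an output.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite, ordered sequence of well-defined steps
    that transforms an input (two integers) into an output (their GCD)."""
    while b != 0:        # each step is unambiguous and leaves no doubt
        a, b = b, a % b  # the state (a, b) strictly shrinks, so it terminates
    return a

print(gcd(48, 18))  # → 6
```

Starting from the state (48, 18), the steps pass through (18, 12), (12, 6), and (6, 0), arriving at the solution 6.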
Algorithms are widely used today on many commercial websites, such as Amazon, to offer products or services to users based on previous actions (which allow them to guess customer preferences) and based on social networks to make offers and, more importantly, to accumulate their own knowledge about users (and eventually sell this).
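As a rough illustration of this kind of recommendation, a toy sketch (not any real retailer's system; the purchase histories below are invented) can suggest products simply by counting which items are bought together:

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories: one set of products per customer
histories = [
    {"book", "lamp"},
    {"book", "lamp", "desk"},
    {"book", "desk"},
]

# Count how often each pair of products appears in the same basket
co_counts = Counter()
for basket in histories:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1

def recommend(item, n=2):
    """Suggest the products most often co-purchased with `item`."""
    scores = Counter()
    for (a, b), count in co_counts.items():
        if a == item:
            scores[b] += count
        elif b == item:
            scores[a] += count
    return [product for product, _ in scores.most_common(n)]

print(recommend("book"))  # both 'lamp' and 'desk' co-occur twice with 'book'
```

Real systems refine this basic idea with far more data and weighting, but the principle is the same: past actions are mined for patterns that predict preferences.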
Search engines, such as Google, use enormously complex algorithms (with thousands of variables) to rank result pages according to what is judged most relevant to the searcher's interests. Weighing so many conditions means processing vast volumes of data (big data), which in turn leads to the use of artificial intelligence.
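Ranking by many variables can be caricatured as a weighted sum of signals per page; the three signals and their weights below are invented for illustration and bear no relation to any real engine's variables:

```python
# Hypothetical ranking signals and weights (real engines use thousands)
WEIGHTS = {"keyword_match": 0.5, "link_popularity": 0.3, "freshness": 0.2}

pages = {
    "page_a": {"keyword_match": 0.9, "link_popularity": 0.2, "freshness": 0.5},
    "page_b": {"keyword_match": 0.6, "link_popularity": 0.9, "freshness": 0.8},
}

def score(signals):
    """Combine a page's signals into one relevance score."""
    return sum(WEIGHTS[name] * value for name, value in signals.items())

# Order pages from highest to lowest score
ranked = sorted(pages, key=lambda p: score(pages[p]), reverse=True)
print(ranked)  # → ['page_b', 'page_a']
```

Here page_b wins (score 0.73 against 0.61) despite a weaker keyword match, because its popularity and freshness outweigh it; tuning such trade-offs at scale is where machine learning enters.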
All of the information obtained by the organizations, services, and social networks that we “visit” on the internet is stored in massive databases known as “big data,” where it may be examined to reveal additional information about us.
So much data has now been gathered about us that our name no longer needs to appear in a database: the data alone, not a personal profile, is enough to identify us. Some services, such as online payments, multimedia platforms, and communication tools, can infer our identity from our activity.
What information does big data collect?
When we use the internet, four types of personal data may emerge and be collected:
- Private: documents we create and “move” from person to person via direct channels (such as email and chat); they can also include cloud documents (Dropbox, iCloud, etc.) that we do not share;
- Public: these are our web publications: personal websites, blogs, magazine pieces, documents in platforms like ISSUU, Scribd, and so on;
- Social: refer to our actions on social networks (status updates, images and videos, “likes,” and so on);
- “Trails” (data trail): accidental traces left by individual activities (e.g. device data, location, links followed, and so on).
Artificial Intelligence (AI)
Origins and claims of artificial intelligence (AI)
First and foremost, what exactly does “artificial intelligence” imply? The notion arose from a series of well-known seminars conducted in 1956 at Dartmouth College in Hanover, New Hampshire, United States. The experts gathered there explored the idea of creating machines capable of doing true “intelligent” activities rather than programmed computations.
Big data and its analysis are frequently discussed in terms of the methods used to extract information with artificial intelligence. Some clarification is needed here, however. “Artificial” simply refers to something that does not exist in nature, so that part of the notion is easy to describe. Defining “intelligence”, by contrast, is challenging, because philosophers and psychologists offer differing explanations.
The topic specialist David Chalmers said in 2015 that it was unrealistic to anticipate consensus in the twenty-first century.
John McCarthy, a pioneer in artificial intelligence at Stanford University, addressed the matter in November 2007 as follows:
Isn’t it possible to define intelligence without having to make a connection to human intelligence?
The answer? Currently, no. The issue is that we are not yet able to generalize about which kinds of computational processes we wish to call intelligent. We understand some mechanisms of intelligence, but not all of them.
Nevertheless, Marcus Hutter and Shane Legg have put forth a broad definition:

Intelligence measures an agent’s capacity to attain objectives in a wide variety of environments.

However, this may be overly general: aren’t there many computer systems that perform such tasks yet that no one would describe as intelligent? It is therefore unlikely to win widespread support, particularly among psychologists.
One of the most recent developments in artificial intelligence and machine learning comes from engineers at the Massachusetts Institute of Technology. Their AI can imitate a human conducting a web search, and can even refine the search if the initial result is unsatisfactory, although administrators must step in at the beginning.
According to project leader Regina Barzilay, the technology was developed to categorize text data automatically, merging all of the gathered data to uncover patterns that correspond to categories specified by the site administrators themselves.
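Pattern-based text categorisation of this general kind can be sketched minimally; the categories and keywords below are hypothetical stand-ins for the patterns an administrator, or a learning system, would actually supply (this is not the MIT project's method):

```python
# Hypothetical categories, each defined by a set of indicative keywords
CATEGORIES = {
    "sports": {"match", "team", "goal", "score"},
    "finance": {"market", "stock", "price", "bank"},
}

def categorize(text: str) -> str:
    """Assign the category whose keywords overlap the text the most."""
    words = set(text.lower().split())
    scores = {cat: len(words & keywords) for cat, keywords in CATEGORIES.items()}
    return max(scores, key=scores.get)

print(categorize("The team scored a late goal to win the match"))  # → sports
```

A real system would learn such keyword patterns from labelled examples rather than take them by hand, but the core step, matching a document against per-category patterns and keeping the best fit, is the same.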
Expert systems vs. artificial intelligence
Even though speech interfaces, such as Siri and Cortana, are referred to as AI systems, in reality they are only expert systems, which date back to the early stages of AI research and are now embedded in devices like smartphones because they use little power.
Expert systems contain base knowledge that enables them to perform operations in a very specialized area, and they can now “learn” from the use made of them (essentially from statistics on the terms dictated to them). They were only made feasible, however, by earlier work on powerful computers with parallel processors capable of extracting patterns from massive amounts of data. Google deserves credit for this accomplishment: by gathering on its machines the information provided by its users, it has reached roughly 90% accuracy in detecting human speech.
What is the ultimate goal of network intelligence?
It has been discussed throughout this article that there are more methods of personalisation than simply adding personal information to huge databases. Apple has demonstrated one such method as well, following Lawrence Lessig’s argument in Code 2.0:
With wireless technology, people may now connect to information services from the Internet in new ways, but what must be protected most is the freedom to experiment with new grid connection methods. A wide range of innovation opportunities for the mobile Internet will be guaranteed by the freedom to connect all sorts of devices to the network and perform tasks that its designers never dreamed of. Are we headed toward a restricted wireless environment where the cable or phone companies’ equivalents control our access to mobile devices? Or will we choose a wireless system design in which it is up to everyone to select which technologies may and cannot be used? I believe the next significant advancement in wireless internet technology will occur if we establish innovation resources.
In conclusion, the ultimate goal can only be “improving the intelligence of the species,” given that every human is an instrument for becoming more intelligent. This forces us to reflect on technology and its line of development rather than unthinkingly continuing the momentum of what has already been done. Technologists “don’t know humans,” as Joshua Cooper puts it, and that ought to change.