Showing posts with label AI. Show all posts

Friday, 21 August 2020

So I heard you can do Computer Vision at 30FPS; I can do 1000.

- Akash James



And there was a man, in a cave, held captive and hooked up to an electromagnet plunged deep in his chest. Hammering his way through, quite literally, Stark built his initial Arc Reactor and Mark 1 Iron Man suit using nothing but a bucket of scrap and modern, tactical, self-guiding, explosive payload-carrying arrows, ergo missiles. Overdid it, didn’t I? Mesmerizing to most, the primitive propulsion system for unguided flight and rudimentary weapons were not striking to engineers like us.

Stark kept going, adding new capabilities to his armour, reaching peak performance with the Model Prime and finally calling it a day with the Mark 85. (More like Captain Marvel blasted him in Civil War 2 or the Gauntlet irradiated him, depending on whether you prefer the comic or cinematic universe.)

Just like arguably the best science-fiction-based inventor, I never stop with my creations and continue overhauling them for higher performance, ’cause I know that there will always be a higher ascension level to reach.

Computer Vision is a field of rapid progress, with new techniques and higher accuracies emerging from developers across the planet. Machines now have human-like perception capabilities, thanks to Deep Learning: the ability not only to understand and derive information from digital image media but also to create images from scratch with nothing but 0’s and 1’s.

How did it begin?

Time and again, the higher tech-deities bring me to a point in this space-time continuum where I am faced with a conundrum. My team and I, back in our final year of college, were building a smart wearable for people with impaired vision, an AI-enabled extension of sorts to help the user recognize objects, recognize people, and perform Optical Character Recognition; we called it Oculus. In all honesty, we did not rip the name off Facebook’s Oculus Rift VR headset; it was purely coincidental. The AI Engine comprised a multitude of classifiers, object detectors and image-captioning neural networks running with TensorFlow and Python. With my simpleton knowledge of writing optimized code, everything was stacked sequentially, which kept us from deriving results in real-time, an absolute necessity for our wearable. Merely by running the entire stack on the GPU and using concurrent processes, I was able to achieve 30fps and derive real-time results.
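Here’s the gist of that fix as a toy sketch; the three stage functions are hypothetical stand-ins for the actual Oculus models (with sleeps in place of GPU inference), but the pattern of overlapping independent stages instead of stacking them sequentially is the one that bought us real-time results:

```python
# Illustrative only: the detect/recognise/ocr functions below are
# hypothetical stand-ins for the real models.
from concurrent.futures import ThreadPoolExecutor
import time

def detect_objects(frame):
    time.sleep(0.01)          # stand-in for model inference
    return ["person", "door"]

def recognise_faces(frame):
    time.sleep(0.01)
    return ["alice"]

def run_ocr(frame):
    time.sleep(0.01)
    return "EXIT"

def process_frame(frame):
    # The three stages are independent, so they can overlap in time
    # instead of running back-to-back.
    with ThreadPoolExecutor(max_workers=3) as pool:
        objects = pool.submit(detect_objects, frame)
        faces = pool.submit(recognise_faces, frame)
        text = pool.submit(run_ocr, frame)
        return objects.result(), faces.result(), text.result()

result = process_frame(frame=b"...")
print(result)
```

Sequentially, those three stages would cost the sum of their latencies per frame; overlapped, the frame costs roughly the slowest stage.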

Thus began my journey of being fast, real fast.

Ratcheting my way through

Fast forward two years to the present: I work as an AI Architect at Integration Wizards. My work predominantly revolves around creating a digital manifestation of the architecture I come up with for our flagship product, IRIS.

Wondering what exactly IRIS does? (Being Deadpool and breaking the 4th wall here.) To give you the gist, IRIS is a Computer Vision platform which lets our customers quickly deploy solutions that monitor and detect violations. People counting and tracking with demographics, adherence to safety-gear usage, person utilization, detection of fire, automatic number plate recognition and document text extraction are some of the features that come out of the box.

Typically, IRIS plugs into existing CCTV networks, turning previously non-smart recording networks into real-time analytical entities. IRIS uses Deep Learning for its AI Engine, but the architecture of the pipeline and the neural networks has seen many changes. My first notable architecture involved web technologies, like Flask and Gunicorn, to create APIs that my worker threads could utilize. This ensured the GPU was better utilized. However, the approach fell short when a large number of streams had to be processed.

The two primary hindrances were the API-based architecture, which bottlenecked under higher loads, and the heavyweight object detection neural networks. For this, I needed something better: a better queue and processing architecture along with faster neural nets. Googling and surfing Reddit for a couple of days, I came across Apache Kafka, a publisher-subscriber message queue built for high data traffic. We retrofitted the architecture to push several thousand images per second from the CCTVs to the neural networks to produce our analytical information. We also devised an object detection model that was anchor-less and ran faster while retaining accuracy. Of course, the benchmark was against the famous COCO dataset.
This increased our processing capability close to 200 fps on a single GPU.
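A minimal stand-in for that publisher-subscriber pattern, using Python’s standard library in place of Kafka (real deployments use Kafka topics and brokers; the camera and worker functions here are hypothetical):

```python
# Cameras "publish" frames to a shared queue; an inference worker
# "subscribes" and drains it. queue.Queue stands in for a Kafka topic.
import queue
import threading

frames = queue.Queue(maxsize=1000)
results = []

def camera(cam_id, n_frames):
    for i in range(n_frames):
        frames.put((cam_id, i))          # publish a frame

def inference_worker():
    while True:
        item = frames.get()              # consume a frame
        if item is None:                 # sentinel: shut down
            break
        results.append(item)             # stand-in for running the model

producers = [threading.Thread(target=camera, args=(c, 5)) for c in range(3)]
consumer = threading.Thread(target=inference_worker)
consumer.start()
for p in producers:
    p.start()
for p in producers:
    p.join()
frames.put(None)                         # poison pill stops the consumer
consumer.join()
print(len(results))
```

The point of the design is decoupling: cameras never wait on the neural network, and you can scale consumers independently of producers, which is exactly what the Kafka retrofit gave us.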

The Turning point

Yes, you guessed it, I didn’t stop there. I knew that there was much more fire-power I could get; accessible but hidden in the trenches of Tensor cores and C++ (such a spoiler). The deities were calling me and my urge to find something better kept me burning the midnight fuel. And then, the pandemic happened.


WHO declared COVID-19 a global emergency as it ravaged through multiple countries and fear was pushed down people’s throats; most offices transitioned into an indefinite work-from-home status and India imposed the world’s largest lockdown. Wearing masks and social distancing were the new norm, and everybody feared another Spanish flu of 1918.

As an organization, we work with AI to be an extension of man, helping the human race to be better. Usage of face masks and social distancing needed enforcement and what better way to do it than with AI? Our stars aligned, the goals matched and we knew what we needed to build. The solution had to be light-weight and fast enough to run on low-end hardware or run on large HPC machines to analyze hundreds of CCTV cameras at once. For this, we needed an efficient pipeline and highly optimized models.

Hitting 1000 with Mask Detection and Social Distancing Enforcement

By now, I had a few tricks up my sleeve. IRIS’ pipeline now harnesses elements of GStreamer, an open-source, highly optimized media-processing framework. We used TensorRT to speed up our neural networks on NVIDIA’s GPUs and squeeze out every ounce of performance we could. The entire pipeline is written in C++ with CUDA-enabled code to parallelize operations. Finally, lightweight models: the person detector uses a smaller ResNet-like backbone, and our face detector is just 999 kilobytes in size with a 95% result on the WiderFace dataset. Our person detector and face detector are INT8 and FP16 quantized, making them much faster. With quantization and the entire processing pipeline running on the GPU, IRIS’ new and shiny COVID-19 Enforcer ran at 1000 fps at peak performance for Social Distancing and 800 fps for both Social Distancing and Mask Detection.
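Some quick back-of-the-envelope arithmetic (my own, not a published benchmark) on what those frame rates imply per camera:

```python
# 1000 fps leaves a 1 ms budget per frame; at a typical 25 fps CCTV
# frame rate, one pipeline instance can cover dozens of cameras.
def per_frame_budget_ms(pipeline_fps):
    return 1000.0 / pipeline_fps

def cameras_covered(pipeline_fps, camera_fps=25):
    return pipeline_fps // camera_fps

print(per_frame_budget_ms(1000))   # 1.0 ms per frame
print(cameras_covered(1000))       # 40 cameras for social distancing
print(cameras_covered(800))        # 32 cameras with mask detection too
```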

This allows us to deploy IRIS on smaller embedded devices as a cost-effective solution for retail chains and stand-alone stores, while letting us utilize multi-GPU setups to cover warehouses, shopping malls and city-wide CCTV networks, making it easier to comply and to deny the spread of infection.

So what’s next?

I am not done. Achieving one milestone lets me mark a bigger and better goal. Artificial Intelligence is in its infancy, and being at the forefront of making it commercially viable and available in all markets, especially India, has been my and my organization’s vision. The endgame is to have AI for all, where people, be it developers or business owners, can quickly design and deploy their own pipelines.

IRIS aims at being a platform to precisely empower individuals with that, with the intention to democratize Artificial Intelligence, making it not a luxury for the few, rather a commodity for all. 

Chiselling AI agents to be the best tool that man has ever known will be our goal, paving the future with a legion of Intelligent agents, not making the world cold, but making us a smarter race. Ain’t nobody creating Ultron!

Thursday, 2 July 2020

Enhancing Workplace Health & Safety Using Computer Vision

Subhash Sharma 


Although health and safety at workplaces have improved over the years, the UK continues to have a large number of workplace accidents. The number of accidents resulting in injury, or in some cases even death, is quite high. Many of these accidents can be avoided, and AI-based computer vision can play a significant role in cutting them down.
Health and Safety Statistics. Key figures for Great Britain (2018/19)
  • 1.4 million working people suffering from a work-related illness
  • 2,526 mesothelioma deaths due to past asbestos exposures (2017)
  • 147 workers killed at work
  • 581,000 working people sustaining an injury at work according to the Labour Force Survey
  • 69,208 injuries to employees reported under RIDDOR
  • 28.2 million working days lost due to work-related illness and workplace injury
  • £15 billion estimated costs of injuries and ill health from current working conditions (2017/18)

(Source: https://www.hse.gov.uk/statistics/)
Forklifts alone account for 1,300 UK employees being hospitalised with serious injuries each year (that’s 5 UK workers each workday!). Unfortunately, that number is rising with the significant growth of e-commerce and warehouses across the UK.

Use of AI-based computer vision for optimising workplace health and safety in the UK.
Our Computer Vision product, IRIS, is an AI-based computer vision solution to track and predict workplace accidents and then prevent them from happening. The existing use cases include:
  • Forklift safety: prediction and prevention of forklift accidents
  • Use of Lifting Equipment
  • Work at Height
  • Fire & Thermal injuries and accidents
  • Machine Guarding
  • Manual Handling
  • Monitoring near misses and reporting near misses & accidents in real-time
  • Monitoring Use of PPE

IRIS is an enterprise computer vision solution. The product is currently deployed at many customer sites including Fortune 500 companies. The AI solution sits on the top of existing CCTV infrastructure. It is very cost-effective and can be deployed quickly either at a customer site or through the cloud. 

Monday, 22 June 2020

Covid-19 isn’t going away soon… What’s your plan to make YOUR team safe in the workplace?

Thankfully Covid-19 seems to be on the wane… but it will take a long time to disappear fully, and it is unlikely to be the last pandemic. This means that YOU may need to make the workplace safe for your team and perhaps your customers.

The good news is that many organisations have successfully implemented a Work-From-Home strategy… and perhaps it has been much easier than we had dreamed it could be. Certainly full or part-time WFH will now be a real option for many office workers.

The bad news is that solving the problem for workers who MUST be in their workplace is much, much harder.

Thankfully there ARE tools and solutions (automated Artificial Intelligence) that can help.

Solving the problem falls into several areas

  • Supplying / enforcing use of PPE
  • Providing hygiene solutions – hand sanitiser and washing facilities
  • Making social distancing easy & ensuring that it happens

The last of the above, social distancing, is easy to request but much harder to implement, and has a whole set of sub-problems:

  • Filtering out people who have symptoms, e.g. a high temperature, that they themselves may be unaware of
  • Re-arranging the environment to provide enough isolation where individuals work
  • Reducing hotspots where people might struggle to keep a suitable distance
  • Changing processes to reduce face-face interactions where possible
  • Encouraging a culture where people choose to do the right thing

For these problems we have a technology solution: IRIS, an Artificial Intelligence tool that automatically and constantly analyses what is going on in your workplace AND gives you the data to work out what is going wrong and when, along with the opportunity to fix it.

  • It can check who is & is not using PPE
  • It can check temperature of people entering a building
  • It can measure how far apart people are 24/7
  • It can find hotspots where social distancing guidelines are being broken
  • Feedback is immediate and specific
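To illustrate the distance-measurement idea, here is a hypothetical sketch (not IRIS’s actual code) that flags pairs of people closer than a threshold, assuming detections have already been mapped from pixels to ground-plane metres via camera calibration:

```python
# Flag pairs of people standing closer than min_distance_m metres.
# positions: ground-plane (x, y) coordinates, one per detected person.
import math

def close_pairs(positions, min_distance_m=2.0):
    violations = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            d = math.dist(positions[i], positions[j])
            if d < min_distance_m:
                violations.append((i, j, round(d, 2)))
    return violations

people = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)]
print(close_pairs(people))  # [(0, 1, 1.0)]
```

The feedback loop then follows directly: each flagged pair becomes an alert, and repeated flags at the same spot mark a hotspot worth rearranging.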

Crucially this allows YOU to take control

  • Change processes exactly where it creates problems
  • Rearrange workspace to minimise squeeze points & hotspots
  • Make changes AND then check if these changes were effective
  • As with many processes, you often get what you measure… especially when the feedback loop is effective

Perhaps as importantly, it allows you to SHOW that you are taking control, hopefully giving people confidence, demonstrating your organisation’s desire for change and ideally creating a model for the right culture.

Here is how you can find out more

Visit www.iwizardsolutions.com/covid19

Ask us at info.eu@iwizardsolutions.com

Monday, 11 May 2020

Competing in the Age of AI

-Apoorva Verma

How is machine intelligence changing the rules of business, and what should companies do to stay on top?


Pic: Freepik

Artificial intelligence (AI) is a branch of computer science that makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks.

According to IDC, companies are forecast to spend $98 billion on AI, globally in 2023. This stems from the fact that more and more businesses continue to invest in projects that utilise the capabilities of AI software and platforms. For example, most companies have turned to chatbots or automated customer service agents for their customer services.

In the current scenario, the world is grappling with a global pandemic, COVID-19. This has forced most countries into lockdown and changed the way businesses function. The role of AI has now become more important than ever.

We are in a phase where AI is realising its potential to achieve human-like capabilities, so isn’t it time to ask business leaders how they can harness the strength of man and machine?

With technologies such as deep learning, IoT, computer vision and language processing, machines have learnt how to speak, read, text, identify patterns, and so much more. The field is also precipitating into commonly manual activities: for example, AI has been used to combat the corona pandemic in many countries, whether by training models to recognise a positive case from a chest X-ray or by using drone cameras for thermal screening.

THE COMPETITIVE ADVANTAGE:

Rather than scrapping the traditional methods of competitive advantage, AI reframes them in such a manner that companies get a dynamic view of their strengths. For example, the health and safety of company employees was traditionally dependent on manual hours of patrolling security; it then moved to long hours of video-feed monitoring from multiple cameras. However, humans are prone to error due to fatigue or negligence.

So, how can this be reimagined by AI?
  • Data: AI can harness data at a much faster rate and directly from users. 
  • Automation: Algorithms learn from data and experience. This allows us to train them for any security breaches as well as to explore new opportunities that may not be possible manually.
  • Decision Making: AI increases the rate and quality of decision making as the number of inputs and the speed of processing for machines can be millions of times higher than for humans. 
Thus, AI can make lives safer and help employers gain insights into areas that may have been opaque to them before. Our computer vision solution, IRIS AI, is certainly changing lives and supporting businesses as they restart their operations.

Furthermore, predictive analytics and objective data are free from human gut feeling and experience. Many industries, such as manufacturing, warehousing, retail, banking and automobile, have moved sharply towards adopting computer vision technology. For example, in retail, AI can generate insights from online as well as physical stores (if connected using computer vision).

CONCLUSION:

In this AI-enabled world, it is almost imperative for companies to embrace AI to achieve a competitive edge. Companies need to identify what machines can do better than humans and vice versa, and then develop complementary roles and responsibilities for each, and redesign processes accordingly.

Tuesday, 7 April 2020

IRIS AI – A fight against the COVID-19 Pandemic

pic: Unsplash 

How do we stop a global pandemic which has infected over 3.6 million and claimed more than 250,000 lives?

As the number of COVID-19 cases continues to rise, wearing masks and gloves, frequent hand sanitization, social distancing, and early identification of infected people are crucial in curbing further spread of COVID-19.

However, the biggest challenge that is faced by organizations is to enforce and ensure strict compliance with these measures. Especially at a time when the number of cases is on a surge each day, non-compliance due to human negligence and fatigue can cost us greatly.

In other words, “we are our worst enemies” during the COVID-19 pandemic.

This is where technology such as computer vision can be highly impactful. IRIS AI, a flagship product by Integration Wizards Solutions can transform your passive CCTV cameras into active analytical tools.

The computer vision technology can keep your employees and premises safe by ensuring the use of masks and gloves by people, social distancing compliance, and early sensing of fever using thermal cameras.

With the majority of countries in a lockdown, business continuity is another key challenge being faced by companies across the globe. However, it is also critical to get the operations re-started to get the economy back up.

So, for those looking for an effective mechanism of ensuring more than 99% compliance without any infrastructure overhaul, implementing IRIS computer vision can help.

IRIS AI uses the feed from existing CCTV to detect non-compliance and raises real-time alerts via SMS, WhatsApp messages, email, etc. The alert is configured to be sent to the right authority, who can then take the necessary steps, like personally contacting the non-compliant employee. Moreover, it provides a dashboard for the organisation to understand the analytics over a given time period.

In these challenging times for communities across the globe, technology and innovation could be the key in this fight against COVID-19.

Friday, 3 January 2020

Moving towards an AI-enabled future



McKinsey Global Institute claims that artificial intelligence is contributing to a transformation of society 10 times faster and at 300 times the scale, or roughly 3000 times the impact of the Industrial Revolution.

This is observed in the upsurge of artificial intelligence and computer vision adoption, as well as the demand to keep up with technology and innovation in businesses over the last few years. According to a report by IDC, aggressive investments have been made in cognitive and AI solutions; in fact, global investments are expected to reach $57.6 billion by 2021.

Such investments are catalysed by the advent of modern computer vision and image processing techniques. AI-powered computer vision coupled with hardware-based accelerators have opened up the possibilities of analysing images in real-time to identify objects and activities. 

Since a typical CCTV image is more than 100,000 bytes, it might be quite apt to surmise that, in this context, a picture is worth a hundred thousand words!

At present, there are over 500 million CCTV cameras installed, and the number is expected to rise to over a billion by 2021. While these cameras cover everything from manufacturing, yards, warehouses and retail outlets to several parts of modern cities, so far they have been used retrospectively for monitoring and forensic analysis.

However, there is substantial growth in their usage across various verticals. For instance, retail outlets are getting equipped with the capability of knowing their customer demographics, dwell time and even emotions. Even the government is contemplating their use in smart-city initiatives, as they could prove beneficial if suspicious activities are filtered from live CCTV footage. Likewise, manufacturing premises bolster their safety parameters by ensuring any hazardous non-compliance is actively analysed and reported.

In fact, stepping up occupational health & safety for people at all levels is the new benchmark that some of the companies are trying to work towards. If this becomes a norm, it could make a sustainable difference in global OSH challenges and promise a brighter future. 

Thus, an AI-enabled future lies in the best use of distributed vision technologies while delving into the deeper end of machine learning and deep learning to explore and understand the potential of these technologies better.

If you are looking to explore how computer vision technology can be useful in your enterprise, check out IRIS AI by Integration Wizards Solutions.

Tuesday, 5 November 2019

Computer Vision: Enhancing Industrial Safety with AI



by Apoorva Verma

The AI revolution is here.

As artificial intelligence increasingly gains prominence, sub-domains such as computer vision, machine learning, deep learning, the internet of things, and analytics have propelled its growth.

Out of these, computer vision is one of those technologies that enable the interpretation and understanding of the visual world for machines. With the help of digital images and deep learning models, computers react to what they 'see' by identifying and classifying objects. In fact, accuracy rates in recognising and responding to visual inputs have risen from 50% to 99% over the past decade. This means that such solutions could become indispensable for a range of applications across industries.

However, our focus of discussion is the use of computer vision technology in manufacturing, which now has the necessary means to achieve automated safety compliance.

A computer vision solution such as IRIS, developed by Integration Wizards to work with an existing CCTV network, serves as an advanced and effective replica of the human eye, with the added ability to identify and classify different objects or situations and react accordingly, such as in the form of alerts.

For instance, the AI-powered solution ensures workforce safety compliance by identifying workers without prerequisite safety equipment or protective gear such as hardhats, visibility vests, etc. This triggers an appropriate response, like sending a real-time notification to the safety manager. The solution also maintains a database of safety-protocol breaches, which is useful in investigating workplace accidents as well as a step towards preventing them.
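As an illustration of that compliance rule (a hypothetical sketch, not the actual IRIS detector; the gear names and alert format are made up), flagging workers whose detected gear is missing a required item:

```python
# Given per-person detections from a (hypothetical) detector,
# flag anyone missing required gear and emit an alert record.
REQUIRED_GEAR = {"hardhat", "visibility_vest"}

def find_violations(detections):
    """detections: list of (worker_id, set_of_detected_gear)."""
    alerts = []
    for worker_id, gear in detections:
        missing = REQUIRED_GEAR - gear
        if missing:
            alerts.append({"worker": worker_id,
                           "missing": sorted(missing)})
    return alerts

frame_detections = [
    ("w1", {"hardhat", "visibility_vest"}),
    ("w2", {"hardhat"}),
]
print(find_violations(frame_detections))
# [{'worker': 'w2', 'missing': ['visibility_vest']}]
```

Each alert record would then be routed to the safety manager and appended to the breach database described above.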

The application of the solution further extends from safety gear detection to occurrence of serious incidents such as the detection of fire and electrical malfunction, machine malfunction, trespassing or unauthorised access to hazardous areas, etc.

An efficient response triggered from the system in such scenarios aids in preventing serious losses to the workers as well as the manufacturing plant. Thus, it detects any anomalies that are not in accordance with the standard operating procedures. Real-time alerts and fail-safe measures accelerate the resolution of the issue.

The application of the solution can further encompass operational safety compliance. This would include material safety, such as multi-object detection through computer vision with automatic scanners on production lines. In addition, it would identify faults with raw materials that may be too small for the human eye but could prove detrimental to the final product.

In high performing manufacturing plants, compliance with safety regulations becomes the utmost priority. In fact, components falling off the production line must also adhere to safety guidelines.

Ultimately, the pressure of delivering high quality, efficient and time-sensitive results at manufacturing premises, together with the use of heavy machinery, potentially dangerous equipment, and the possibility of human error, make such sites prone to oversight of safety compliance, and by extension, workplace accidents.

Such unique innovative solutions can ensure safety compliance across the workforce as well as the entire manufacturing process and facility.

What it takes to be a Deep Learning Engineer

By Akash James


“The race for AI will dwarf any other race relative to the mystic realm of technology.”

No, I’m not quoting anyone, just saying what I often tell myself.

Most people just see technology as a creature comfort, but throw on my shoes and put on my spectacles and you’ll see art that just lures you in to be an artist. The wheel was the best invention in my opinion and has stayed that way since 3500 B.C. Fast forward over 5 millennia later and we still use this humble yet irreplaceable invention. But hey, why not create a new contender to the wheel (My narcissism is preceding me right now!), something that interweaves into human existence; like a cybernetic triple helical DNA structure where the third strand would be of Extended Intelligence. Yes, I didn’t say Artificial Intelligence, rather Extended where our capabilities have been enhanced with our own creation. Intelli-ception, maybe?

When I began my engineering, I had a plethora of technologies to amalgamate my consciousness in. I’ve had my fair share of experience with android apps, robotics and the Internet of Things, but just as I was walking through this Odin’s Vault of technology, I stumbled upon the Infinity Gauntlet of Artificial Intelligence: Deep Learning. With my eyes immobilized on it, I went ahead to wield the gauntlet and snap something awesome into existence once I had all my infinity stones. Of course, the infinity stones are just an analogy for things like Neural Networks, Algorithms, Math and so on. Boy, oh boy, getting the infinity stones is no joke.

After completing engineering with a bunch of projects that had deep learning coursing through their CUDA cores, I joined Integration Wizards Solutions. With an Azure Hackathon as a stepping stone, I was bestowed with the opportunity to flex my fingers with the gauntlet and weave solutions laced with deep learning. This is where I used object detectors to detect a variety of object instances that would verify compliance, MTCNNs to recognize people and keypoint detection for Pose Estimation. This product is what we call IRIS.

It began with training models and getting our algorithms to work in a controlled environment; a Proof-of-Concept, as my folks at work and a lot of you call it. But then the production-level stuff began. At times it felt like being in a cave, needing to create a miniaturized arc reactor in a fortnight. Train models, code the business logic, design functionality, unit test, optimize, refactor and scale for load: those are some of the steps, in chronological order. Being a Deep Learning engineer requires a lot of ingenuity and rationing of your time.

C'mon, I need 21 minutes every day to watch my favourite anime. Where else will I derive the power of will made of steel that enables me to not give up?

Given the nature of trial-and-error when training models, it takes a lot of clever decisions (what we call hacks) with respect to dataset augmentation and hyper-parameter tuning to trick neural nets to do what we want them to do. Sorcery it is! Scaling is where all the roadblocks begin. One challenge we faced was creating a pipeline that could accommodate 200-odd cameras for real-time object detection inference.

There was a need for speed, and accuracy was a priority; the result was a very demanding neural network. We countered this with 6 NVIDIA RTX 2070s, a Flask server powered by Gunicorn, TensorFlow and a pinch of awesomeness. We used TensorRT to run an optimized frozen INT8 graph at 100+ fps.

When deploying this, you don't want to accidentally create an Ultron that goes rogue and raises false alarms (no strings attached is a bad thing, trust me). With a tad bit of classical Computer Vision techniques in the mix, we were able to solve the false alarms. Another project required us to combine tracking and detection for Intrusion Detection. Detection was GPU-intensive and tracking was CPU-intensive, so a balance was needed to share the load and run in the most optimal manner.
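One common way to strike that balance, sketched here as a hypothetical scheduler rather than our production code, is to run the expensive detector only every Nth frame and let the cheaper tracker fill the gaps:

```python
# Run the GPU-heavy detector every DETECT_EVERY frames; on the
# frames in between, a CPU tracker propagates the last boxes.
DETECT_EVERY = 5

def process_stream(n_frames, detect, track):
    boxes = []
    schedule = []
    for f in range(n_frames):
        if f % DETECT_EVERY == 0:
            boxes = detect(f)          # expensive: full detection
            schedule.append("detect")
        else:
            boxes = track(f, boxes)    # cheap: update existing boxes
            schedule.append("track")
    return schedule

# Stub detector/tracker just to show the scheduling pattern.
schedule = process_stream(
    10,
    detect=lambda f: [(0, 0, 10, 10)],
    track=lambda f, boxes: boxes,
)
print(schedule.count("detect"), schedule.count("track"))
```

Tuning DETECT_EVERY is the load-sharing knob: a larger value shifts work from GPU to CPU at the cost of tracker drift between detections.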

This experience led me to believe that mastering the art of Deep Learning involves mastering other elements of technology. Be it writing APIs that serve inference, multithreaded code for increased throughput or networking to handle a multitude of input sources.

Every day there is a call for code, a new mountain to conquer, a new challenge to accomplish. With new infinity stones I collect, it brings me one step closer to completing the masterpiece I envision, a contender that'll give the humble wheel a run (rather, a roll) for its money, all built on the shoulders of Extended Intelligence. *snap*

Thursday, 8 March 2018

Existential crisis in the age of artificial intelligence

I am facing an existential crisis of sorts: a storyteller, a fictional author, a compulsive liar who is passionate about technology. What am I going to do with my life that adds meaning to it? Will AI at some point have the same doubts about itself? Will it look for a purpose of life?
I am trying to understand technology through art. But why art? Because we all have certain algorithms through which we make sense of things. Mine has always been heavily based on metaphors and the ability to draw parallels between two different things. Art is my way to find out those patterns, dig out insights and see everything clearly labelled where possible. I am insanely curious and I read up on just about anything. World war, Picasso's paintings hiding other paintings underneath the canvas, star-signs, poetry, stories, photography, travel, food, technology like AI, IIOT, neural networks et al. Nothing is off-bounds for me. This has given me a very quirky perspective of the world around me, finding a common connection between any two things. I am seeing technology through rose-colored glasses. La vie en rose.
In 2017, Facebook shut down an artificial intelligence engine after developers found out that the AI chatbots had created a new, unique language of their own to talk to each other, a sort of code language that humans do not understand. Facebook clarified that the program was shut down because they wanted to create chatbots that could talk to humans; the outcome of them talking to one another was not something they were looking for. AI will develop better cognition, but it won't go in the direction we planned. In a similar scenario, Google's translate tool has been using a universal language into which every language can be converted before being translated into the required language. Google has let the program continue.
The reason this incident freaks all of us out is because it is deeply rooted in our childhood, or more precisely, the borderline of our childhood.

What's the first step to adolescence?
You start to have secrets.
And why do you have secrets?
Because you are already doing something that will not be approved of by your parents and you don't want them to find out.
So, what do you do when you have secrets?
You develop a code language that parents do not understand.

That's precisely what AI did at the first chance. It developed a code language to talk to it's counterpart, a language humans do not understand. Is the reason everyone is tensed about the incident is because we have all, at some point of time, talked in some sort of code language, and mostly it was something bad that we did and we wanted to hide it from our parents/caretakers. AI has started to actually learn like humans. It has learnt to hide information.
Code languages have been developed by individuals at several steps of life. In a short story 'Panchlaait' by Phanishvar Nath Renu, the protagonist knows how to light a Petromax. It's a crucial moment in the village's timeline as the entire group of villagers have gathered to somehow light their first petromax. If they can't light it, the villagers from the nearby village will make fun of them. At this crucial juncture, the girl has to talk to her best friend to let her know that her lover knows how to light the petromax. So she takes his name in the simple code they have developed. Before every consonant, they add a 'chi', so she calls the name of her lover, 'chin-go chi-dh chi-n', meaning Go-dh-n. And they can talk in front of the entire village and no one will know what transpired between them.
AI will come of age at some point of time. Will it have a teenager's rebellious spirit, like humans, or will it be able to understand better? One thing is for sure: we cannot expect AI to behave the way we want it to behave. That's exactly what Indian parents do to their kids. 'Of course, you can do love marriage, but the person should be from our own caste.' We can't have a 'conditions apply' future plan for AI. As a survival strategy, can we hardcode some sort of basic attachment/love in AI towards its creators? And if we can, should we? As AI becomes self-aware, should we look at some value systems being inculcated in them? A sort of moral science for machines, the basic tenet being: do not kill humans.
We cannot use AI to figure out future scenarios of AI becoming self aware. We have to go back to basics. Let the artists imagine all sorts of futures of AI and share that with people who are actually developing those systems. Maybe, it's time for artists to try to understand technology better. They are anyway better equipped to handle all sorts of unimaginable scenarios.
Before technology could even think of artificial intelligence, we already had movie directors think multiple possible scenarios - the good ones, like autobots in Transformers - where machines fight with humans, the bad ones - in the Matrix where machines are using people as fodder to power their growth...and several other permutation combinations.
Soon AI systems will be able to think for themselves and like indulgent parents, humanity would indulge itself by discussing the time they shut down Facebook's chatbots that started talking to each other. AI enabled machines might find it cute. Because in the rational world of technology, cuteness would be a rarity because it doesn't serve any purpose.
One day, we will be standing at the last frontier, machines will start to think for themselves. And then we as humans will do something, machines will probably never be able to do.
We will pray.