Friday, 8 November 2019

Android development - Working With A Designer


By Prakhar Srivastava

Image: https://dribbble.com/shots/3437005-Designer-vs-Developer-Rebound-challenge
I have worked with developers who were stuck before a release just because they didn't have an icon. An email icon. Can you believe it? An email icon in four sizes was all the project needed to release a build.

So in this lesson, we'll see how to reduce the designers' workload and thereby play nice with them (winks!).

1. Ask for icons in only one size and any color.

Always ask for the icons in only one size. A good size is 128 x 128. But we have to keep the icons in four sizes, in four different drawable directories (mdpi, hdpi, xhdpi and xxhdpi). This is where you get to see the cooler side of Android Studio: you can generate sharp drawables at every density, in any color, using the Image Asset tool.

Right-click the drawable folder and choose New > Image Asset:
drawable (right click) > New > Image Asset



In the next window, choose the type of icon you want. For all the tab and drawable_left/right icons, the ActionBar & Tab Icons type works perfectly.

Choose the asset type as Image and the color as Custom. The designer will give you the color code, and you can generate the icon in that color.


The same thing works for launcher and notification icons.

You've got it working. See, easy: only one size and the color code. Remember that.

2. Avoid asking for color codes

Don’t ask for color codes. If you have the screen designs, you can extract every color from them.



Download the free software getcolor and use its eye-dropper tool anywhere on screen (not only on images but absolutely anywhere).

Download it here

3. Fonts and spacing

Stop asking for the .ttf or .otf files for a font if the font is free.
We can find it on our own. Just search for the name of the font with "ttf" or "font download" appended. The designer does the same thing to find the font.

Also, fonts can be identified from an image on websites like WhatFontIs, but these are not very reliable, so the font name should still be mentioned by the designer.

image source: https://helpx.adobe.com/indesign/how-to/adjust-letter-spacing.html

In the image above, we can see how letter spacing impacts the design and how significant it is.

A lot of developers do everything right but miss the letter-spacing. The colors are perfect, the font type and size are perfect, the icons are perfect. But when the PSD and the XML design are compared, they don't look the same. The reason is letter-spacing, which the designer knew was important and we developers chose to ignore. Letter spacing can be added in Android (API 21+) with one line in the XML:

android:letterSpacing="0.05"

Note that the value is a plain float in em units, not an sp dimension.

Also, one should take care of the shadows. Elevation and shadows add a lot to the beauty of a design. In Android, shadow and elevation can be defined by adding one line in the XML:

android:elevation="4dp"

The value is a dp dimension; pick whatever depth the design calls for.
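Putting the two attributes together, here is a minimal TextView sketch (the text and values are illustrative, not taken from any real design):

```xml
<!-- letterSpacing is a unitless float in em units (API 21+);
     elevation is a dp dimension that draws a shadow under the view. -->
<TextView
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Sign up with email"
    android:textSize="16sp"
    android:letterSpacing="0.05"
    android:elevation="4dp" />
```

Match both values against the design spec before calling the screen done.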

This is how designers and developers together can deliver a great product. Share the responsibilities.

These are a few of the time-saving hacks for designers as well as Android developers. There are a lot more tricks, which I will cover in upcoming posts. But these are the easiest and most needed, so make sure you get them right.

Tuesday, 5 November 2019

Computer Vision: Enhancing Industrial Safety with AI



by Apoorva Verma

The AI revolution is here.

As artificial intelligence increasingly gains prominence, sub-domains such as computer vision, machine learning, deep learning, the Internet of Things and analytics have propelled its growth.

Out of these, computer vision is one of the technologies that enable machines to interpret and understand the visual world. With the help of digital images and deep learning models, computers react to what they 'see' by identifying and classifying objects. In fact, the technology's accuracy in recognising and responding to visual inputs has risen from 50% to 99% over the past decade. This means that such solutions could become indispensable for a range of applications across industries.

However, our focus of discussion is the use of computer vision technology in manufacturing, which now has the necessary means to achieve automated safety compliance.

A computer vision solution such as IRIS, developed by Integration Wizards to work with an existing CCTV network, serves as an advanced and effective replica of the human eye, with the added ability to identify and classify different objects or situations and react accordingly, such as in the form of alerts.

For instance, the AI-powered solution ensures workforce safety compliance by identifying workers without prerequisite safety equipment or protective gear such as hardhats, visibility vests, etc. This triggers an appropriate response, like sending a real-time notification to the safety manager. The solution also maintains a database of safety-protocol breaches, which is useful in investigating workplace accidents and is a step towards preventing them.

The application of the solution further extends from safety-gear detection to serious incidents such as fire, electrical or machine malfunction, and trespassing or unauthorised access to hazardous areas.

An efficient response triggered by the system in such scenarios helps prevent serious losses to the workers as well as the manufacturing plant. The system thus detects any anomalies that are not in accordance with standard operating procedures, while real-time alerts and fail-safe measures accelerate the resolution of the issue.

The application of the solution can further encompass operational safety compliance. This would include material safety, such as multi-object detection through computer vision with automatic scanners on production lines. In addition, it can identify faults in raw materials that may be too small for the human eye but could prove detrimental to the final product.

In high-performing manufacturing plants, compliance with safety regulations becomes the utmost priority. In fact, even components coming off the production line must adhere to safety guidelines.

Ultimately, the pressure of delivering high-quality, efficient and time-sensitive results at manufacturing premises, together with the use of heavy machinery, potentially dangerous equipment and the possibility of human error, makes such sites prone to oversights in safety compliance and, by extension, workplace accidents.

Such unique innovative solutions can ensure safety compliance across the workforce as well as the entire manufacturing process and facility.

What it takes to be a Deep Learning Engineer

By Akash James


“The race for AI will dwarf any other race relative to the mystic realm of technology”.

No, I’m not quoting anyone, just saying what I often tell myself.

Most people just see technology as a creature comfort, but step into my shoes and put on my spectacles and you’ll see art that just lures you in to be an artist. The wheel was the best invention in my opinion and has stayed that way since 3500 B.C. Fast forward over five millennia and we still use this humble yet irreplaceable invention. But hey, why not create a new contender to the wheel (my narcissism is getting ahead of me right now!), something that interweaves into human existence, like a cybernetic triple-helical DNA structure where the third strand would be Extended Intelligence. Yes, I didn’t say Artificial Intelligence, but rather Extended, where our capabilities are enhanced by our own creation. Intelli-ception, maybe?

When I began my engineering, I had a plethora of technologies to amalgamate my consciousness in. I’ve had my fair share of experience with Android apps, robotics and the Internet of Things, but just as I was walking through this Odin’s Vault of technology, I stumbled upon the Infinity Gauntlet of Artificial Intelligence: Deep Learning. With my eyes immobilized on it, I went ahead to wield the gauntlet and snap something awesome into existence once I had all my Infinity Stones. Of course, the Infinity Stones are just an analogy for things like neural networks, algorithms, math and so on. Boy, oh boy, getting the Infinity Stones is no joke.

After completing engineering with a bunch of projects that had deep learning coursing through their CUDA cores, I joined Integration Wizards Solutions. With an Azure Hackathon as a stepping stone, I was bestowed with the opportunity to flex my fingers with the gauntlet and weave solutions laced with deep learning. This is where I used object detectors to detect a variety of object instances for verifying compliance, MTCNNs to recognize people, and keypoint detection for pose estimation. This product is what we call IRIS.

Initially, it began with training models and getting our algorithms to work in a controlled environment: a Proof-of-Concept, as my folks at work and a lot of you call it. But then the production-level stuff began. At times it felt like being in a cave, needing to build a miniaturized arc reactor in a fortnight. Train models, code the business logic, design functionality, unit test, optimize, refactor and scale for load: those are the steps, in chronological order. Being a Deep Learning engineer requires a lot of ingenuity and rationing of your time.

C'mon, I need 21 minutes every day to watch my favourite anime. Where else will I derive the power of will made of steel that enables me to not give up?

Given the trial-and-error nature of training models, it takes a lot of clever decisions (what we call hacks) with respect to dataset augmentation and hyper-parameter tuning to trick neural nets into doing what we want them to do. Sorcery it is! Scaling is where all the roadblocks begin. One challenge we faced was creating a pipeline that could accommodate 200-odd cameras for real-time object detection inference.

There was a need for speed, and accuracy was a priority. The result was a neural network that was very demanding. We countered this with six NVIDIA RTX 2070s, a Flask server powered by Gunicorn, TensorFlow and a pinch of awesomeness. We used TensorRT to run an optimized frozen INT8 graph at 100+ fps.

When deploying this, you don't want to accidentally create an Ultron that goes rogue and raises false alarms (no strings attached is a bad thing, trust me). With a tad bit of computer vision techniques in the mix, we were able to eliminate the false alarms. Another project required us to combine tracking and detection for intrusion detection. Detection was GPU-intensive and tracking was CPU-intensive, so a balance was needed to share the load and run in the most optimal manner.

This experience led me to believe that mastering the art of Deep Learning involves mastering other elements of technology too, be it writing APIs that serve inference, multithreaded code for increased throughput, or networking to handle a multitude of input sources.

Every day there is a call for code, a new mountain to conquer, a new challenge to accomplish. With new infinity stones I collect, it brings me one step closer to completing the masterpiece I envision, a contender that'll give the humble wheel a run (rather, a roll) for its money, all built on the shoulders of Extended Intelligence. *snap*

Thursday, 8 March 2018

Existential crisis in the age of artificial intelligence

I am facing an existential crisis of sorts: a storyteller, a fiction author, a compulsive liar who is passionate about technology. What am I going to do with my life that adds meaning to it? Will AI at some point have the same doubts about itself? Will it look for a purpose in life?
I am trying to understand technology through art. But why art? Because we all have certain algorithms through which we make sense of things. Mine has always been heavily based on metaphors and the ability to draw parallels between two different things. Art is my way to find those patterns, dig out insights and see everything clearly labelled where possible. I am insanely curious and I read up on just about anything. World wars, Picasso's paintings hiding other paintings underneath the canvas, star signs, poetry, stories, photography, travel, food, technology like AI, IIoT, neural networks et al. Nothing is off-bounds for me. This has given me a very quirky perspective of the world around me and an ability to find a common connection between any two things. I am seeing technology through rose-colored glasses. La vie en rose.
In 2017, Facebook shut down an artificial intelligence engine after developers found out that the AI chatbots had created a new, unique language of their own to talk to each other. A sort of code language that humans do not understand. Facebook clarified that the program was shut down because they wanted to create chatbots that could talk to humans; the chatbots talking to one another was not the outcome they were looking for. AI will develop better cognition, but it won't go in the direction we planned. In a similar scenario, Google's translate tool has been using a universal language into which every language can be converted before being translated into the required language. Google has let the program continue.
The reason this incident freaks all of us out is because it is deeply rooted in our childhood, or more precisely, the borderline of our childhood.

What's the first step to adolescence?
You start to have secrets.
And why do you have secrets?
Because you are already doing something that will not be approved of by your parents and you don't want them to find out.
So, what do you do when you have secrets?
You develop a code language that parents do not understand.

That's precisely what AI did at the first chance. It developed a code language to talk to its counterpart, a language humans do not understand. The reason everyone is tense about the incident is that we have all, at some point, talked in some sort of code language, and mostly it was to hide something bad we did from our parents or caretakers. AI has started to actually learn like humans. It has learnt to hide information.
Code languages have been developed by individuals at several steps of life. In a short story 'Panchlaait' by Phanishvar Nath Renu, the protagonist knows how to light a Petromax. It's a crucial moment in the village's timeline as the entire group of villagers have gathered to somehow light their first petromax. If they can't light it, the villagers from nearby village will make fun of them. At this crucial juncture, the girl has to talk to her best friend, to let her know, that her lover knows how to light the petromax. So she takes his name in the simple code they have developed. Before every consonant, they add a 'chi', so, she calls the name of her lover, 'chin-go chi-dh chi-n' meaning Go-dh-n. And they can talk in front of the entire village and no one will know what transpired between them.
AI will come of age at some point. Will it have a teenager's rebellious spirit, like humans, or will it be able to understand better? One thing is for sure: we cannot expect AI to behave the way we want it to behave. That's exactly what Indian parents do to their kids: 'Of course you can do a love marriage, but the person should be from our own caste.' We can't have a 'conditions apply' future plan for AI. As a survival strategy, can we hardcode some sort of basic attachment or love in AI towards its creators? And if we can, should we? As AI becomes self-aware, should we look at some value systems being inculcated in it? A sort of moral science for machines, the basic tenet being: do not kill humans.
We cannot use AI to figure out future scenarios of AI becoming self aware. We have to go back to basics. Let the artists imagine all sorts of futures of AI and share that with people who are actually developing those systems. Maybe, it's time for artists to try to understand technology better. They are anyway better equipped to handle all sorts of unimaginable scenarios.
Before technology could even think of artificial intelligence, movie directors had already imagined multiple possible scenarios: the good ones, like the Autobots in Transformers, where machines fight alongside humans; the bad ones, like The Matrix, where machines use people as fodder to power their growth; and several other permutations and combinations.
Soon AI systems will be able to think for themselves, and like indulgent parents, humanity will indulge itself by reminiscing about the time it shut down Facebook's chatbots that had started talking to each other. AI-enabled machines might find it cute. Because in the rational world of technology, cuteness would be a rarity, as it doesn't serve any purpose.
One day, we will be standing at the last frontier: machines will start to think for themselves. And then we humans will do something machines will probably never be able to do.
We will pray.