Monday, April 4, 2016

Quantum Computers -- The next 'WAVE' of Computing....

Over this weekend, please join me as I delve into the world of "Quantum Computing" and share my personal thoughts on this rather exotic and fascinating subject....

NOTE: For those who do not have time to read the full article, please have a quick glance at the attached illustration, where you will see "Raman Rays", named after the great Indian Physicist Sir C V Raman... The contributions made in the first part of the 20th century to arguably the most complex field of science, viz. Quantum Physics, by Indian scientists like Satyendra Nath Bose, after whom the boson (as in the GOD particle, the "Higgs boson") is named, and Sir C V Raman, who is credited with the discovery of the "Raman Effect" and the evolution of the intricate theoretical physics and mathematics behind it, are indeed remarkable...

We have all heard of "Quantum Computing", the next big wave in the field of computing, which is arguably going to unleash the huge turbo boost of power that the current computing world so badly needs...




With Moore's law already slowing down in its tracks, and microprocessor pioneers like Intel delaying the release of their next chips as they battle the engineering challenges of fitting ever more transistors into the same tiny area of silicon, alternatives to silicon chips need to be actively explored. While Intel managed to release the current generation of Xeon chips in 2015 on a 14 nm process, down from the 22 nm of their earlier generation of chips, the next generation of 10 nm chips will probably be released only towards the end of 2017.

And this challenge of declining returns from Moore's law will only become more accentuated in the coming years; in my mind's eye I see the incremental gains in technology dropping off drastically by 2025...
And if we are to keep producing faster and faster computing chips, together with ever lower storage costs, both of which are quintessential for discernible progress in newer fields of computer science such as Artificial Intelligence, Robotics and IoT (which will in turn result in huge value addition to the human race), then we desperately need rapid progress in two fields: Quantum Computing (disruptive innovation) and Nanotechnology (radical innovation).

The quantum computers that are being touted as great innovations today are in reality only at a very early proof-of-concept stage and cannot be used to do anything useful other than solve a few preprogrammed arithmetic operations. There are 2 sub-categories being pursued in the field of Quantum Computing:
1) "Quantum Annealers", which are more akin to the Application Specific Integrated Circuits (ASICs) of yesteryear: they implement one particular algorithm only and cannot be reprogrammed for other general purpose tasks.

2) "Universal Quantum Computers" which are the general purpose computing platforms akin to the Microprocessors of today which can be used to perform any general purpose computing tasks and can be programmed on the fly like their Silicon counterparts from Intel today....

Despite the huge amount of hype and hoopla surrounding quantum computing today, and the claims of huge wins and achievements in this field, all the working prototypes built so far (which work only in carefully controlled laboratory conditions) belong to the category of "Quantum Annealers", capable of running only a single algorithm. The point to be noted, however, is that if we choose the one right algorithm, say a machine vision algorithm from Google or a natural language processing algorithm from Microsoft Skype, then that one specific chosen algorithm has been reported to run at an unbelievable "100 million" times the speed at which it runs on a conventional top-of-the-line CPU available today from, say, Intel. In the world of AI and machine learning, where computing speed is the key enabler of new functionality, a "100 MILLION-X" improvement in computing speed will work wonders and make hitherto unimaginable functionality realisable in the world of computing....

The picture shown below is from an April 2016 MIT Technology Review article on the latest quantum computer, showcased a few weeks ago at a top American university R&D lab. This quantum computer is only a "5-qubit" machine whose circuitry can handle the equivalent of just 5 bits of data at a time. The circuitry and the quantum chips operate at a temperature of about -272 degrees centigrade, in the superconducting realm, which brings to the fore certain special quantum properties that make such superlative high-speed computing a realistic possibility. Perhaps one would be tempted to compare this 5-qubit computer, which runs a single algorithm that solves certain mathematical problems such as Laplace and Fourier transformations, with the Intel 4004 integrated circuit, a 4-bit silicon processor released a whopping 45 years ago in 1971. One might also be tempted to compare it with the commonly available 64-bit computers that sit on every desktop today and confidently conclude that the latest quantum computing invention is no big deal.

However, it is indeed a big deal, because the overall computing speed depends primarily on the following factors: 1) processor bits, 2) processor speed and 3) processor parallelism.

It is to be noted that the number of bits processed has increased only four-fold, from 16 bits in the first Intel 8086 processor released in 1978 (whose architecture still acts as the backbone and the template even for the 64-bit Intel Xeon chips being produced today) to 64 bits now....

Among these three key factors, the primary one determining overall computing speed is the raw processing power of the microprocessor chip.

So the state-of-the-art "Quantum Annealer" chip that was showcased in March 2016, and whose architecture is shown in the MIT Technology Review illustration below, may only be a 5-qubit special purpose circuit, capable of running a single mathematical algorithm, not re-programmable, and nowhere close to a general purpose microprocessor such as the 8086 of the late 1970s, but it makes up for all its shortcomings by running that one algorithm at a reported "100 Million X" the speed of a modern state-of-the-art silicon CPU.....

To put things in context, the human brain, the most powerful and fastest supercomputer known to us today, runs at a speed of the order of a "1 Billion X", using the complex and intricate connections of hybrid bio-chemical and electrical circuits, aka neurons, with their superbly tentacled web of connectors called synapses... And on top of this raw biological circuit speed sits an incredible amount of massive parallelism which is simply unprecedented in the computing world....

Assuming that Moore's law continues to give the same returns as it did over the last three decades, it would take a good three decades from today before silicon microprocessors catch up with the speeds theoretically possible via quantum computing, and possibly a good four decades from today to catch up with the processing speed of the human brain... However, as I argued earlier, Moore's law is already showing diminishing returns and will most likely hit a plateau in about a decade's time, rendering the above conjectures null and void...
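For the curious, here is a back-of-the-envelope sketch in Python of the arithmetic behind such projections. The 12 to 24 month doubling cadences below are purely my own assumptions for illustration, not a rigorous forecast...

```python
import math

# How many Moore's-law doublings would conventional silicon need to deliver
# the "100 Million X" speed-up quoted above, and how long might that take?
speedup_target = 100_000_000                    # the "100 MILLION-X" figure

doublings_needed = math.log2(speedup_target)    # roughly 26.6 doublings

for months_per_doubling in (12, 18, 24):        # assumed doubling cadences
    years = doublings_needed * months_per_doubling / 12
    print(f"Doubling every {months_per_doubling} months -> "
          f"about {years:.0f} years to reach a 100 million X gain")
```

Even under the most generous assumptions the answer comes out in decades, which is exactly why a plateauing Moore's law is such a concern...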

If the dream that computer scientists have nurtured over the last century, of building a computer whose capability equals that of the human brain, is to be realised, the only way is thru' the advancement of "Quantum General Computing"....

In my mind's eye I see "Quantum General Computing" evolving to the stage where it can power robots with near-human intelligence in about a decade or so from now....
As the saying often described as an ancient Chinese proverb goes, "May we all live in interesting times..." 😊

Note: One might wonder why I have been talking only about Intel chips as a baseline while comparing silicon transistors with other technologies. What about the cutting-edge chips in mobile phones and tablets, which are supposedly the in-thing today? Unfortunately, the microprocessors and other computing chips used in so-called hi-tech modern gadgets such as tablets and mobiles are nowhere near the cutting edge of innovation in the field of chip design. Intel still continues to lead the race in innovation in the realm of silicon computing chip design...😊

#Musings  #Technology  #Future  #Computing  #AI

Saturday, January 9, 2016

Today's AI -- IS IT TRULY DISRUPTIVE?



The launch of the book titled ‘The Second Machine Age’ by Erik Brynjolfsson and Andrew McAfee of MIT in early 2014 triggered a series of debates, heated arguments and expressions of opinion on a range of topics such as ‘robots ruling over human beings’, ‘computers replacing humans in their jobs’, ‘the ethical, philosophical and moral implications of the various possibilities opened up by Artificial Intelligence’ and so on. The year 2015 indeed witnessed what could be termed a “CAMBRIAN EXPLOSION” in the field of Artificial Intelligence. The spate of progress in this field during this one year alone has astonished many AI experts and engineers, as progress of this magnitude was simply unexpected. The incredible euphoria surrounding the pace of advancement in this field will continue in the year 2016 as well, and we will witness computer systems and robots doing things that were confined to science fiction till last year. Like all the technological revolutions that preceded it, the digital revolution will certainly cause disruptions to business models and industry structures. Technologies like Big Data analytics and the Internet of Things (IoT) will cause huge shifts in the way business is conducted in future. Similarly, Artificial Intelligence will reduce the need to employ workers in the low-skill or low-education category, as well as thin out the middle management layer.

The interesting point to note, however, according to me, is that Artificial Intelligence, Big Data and IoT are being made possible today primarily because of the cheaper and more miniaturised electronic circuits now available, thanks to more and more silicon transistors being packed ever more tightly into the same area of silicon, a growth rate popularly described by what is known as Moore’s Law.

Relying on incremental increases in computing speed and reductions in computing cost, delivered in a predictable manner every year for the last several decades, is not something that can be called "DISRUPTIVE INNOVATION" or path-breaking in the truest sense of the word. If one really looks at the computing world, not much has changed in terms of basic technology per se from the time Alan Turing and his team cracked the Enigma code during World War 2 using electromechanical machines in the 1940s (with the stored-program Von Neumann architecture arriving shortly after), to the IBM mainframes of the 1960s and the DB2 databases that followed, to the Big Data systems of today, where the databases are only far larger and fairly old, well-established statistical tools and techniques are applied to the processed data. IoT, in its most essential form, is a miniaturisation of the A/D and D/A converter integrated circuits that have been available for decades now. And yes, the most talked-about subject of Artificial Intelligence has not changed in a disruptive fashion from the days when Alan Turing presented his paper on the ‘Turing Machine’ back in 1936, through the perceptrons of the 1950s and 60s, to the multi-layer networks built on theoretical mathematical techniques such as sigmoid activation functions, convolution operations and feed-forward neural networks et al, which already had good working implementations in place as early as the mid-1980s.
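To make the point concrete, here is a minimal sketch of the kind of sigmoid activation and feed-forward pass that this decades-old literature describes. The network sizes and weights below are made up purely for illustration...

```python
import numpy as np

def sigmoid(z):
    """Classic sigmoid activation: squashes any real number into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def feed_forward(x, w_hidden, b_hidden, w_out, b_out):
    """One forward pass through a network with a single hidden layer."""
    hidden = sigmoid(x @ w_hidden + b_hidden)   # hidden layer activations
    return sigmoid(hidden @ w_out + b_out)      # network output in (0, 1)

# Toy illustration with made-up weights: 2 inputs -> 3 hidden units -> 1 output
rng = np.random.default_rng(0)
x = np.array([0.5, -1.2])
output = feed_forward(x,
                      rng.normal(size=(2, 3)), np.zeros(3),
                      rng.normal(size=(3, 1)), np.zeros(1))
print(output)   # a single value between 0 and 1
```

The mathematics here would have looked perfectly familiar to a researcher in the mid-1980s; what has changed since is mostly the scale of data and compute thrown at it...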


In a nutshell, without a fundamental disruption on the technology front, there may never be a fundamental disruption to business systems or macro-economies in the long run.

"This might just be an ephemeral phase in the history of technological innovation and like all those that preceded it, this too shall pass by..."

Note: 
I did quite some research on the topic of Artificial Intelligence, with a specific focus on the topic of SINGULARITY... I have read arguments both in favor of and against this eventuality happening, as well as arguments on whether or not it will ever happen during our lifetimes... starting with Nick Bostrom's seminal book and articles on "Superintelligence" and the philosophical, moral, ethical and technical debates around his masterpiece, to the work by Jeff Hawkins titled "On Intelligence", to the book by Ray Kurzweil titled "How to Create a Mind", and so on.... I then ventured into the area of neuroscience and looked at the work of top guns such as Sebastian Seung, centred around the CONNECTOME, which looks at advanced possibilities in AI and topics in medicine such as downloading one's brain and neural circuitry for posterity, etc.... Last but not least were the papers and articles by Peter Diamandis from his "Abundance" series.... And finally my own ground-up programming experiments in the field of AI and Deep Learning, based on my hand-written programs implementing some of the latest algorithms, just to get a first-hand feel of the topic... At this point of time, I guess what I have written in this article is based on my gut feel and understanding... I could change my mind after a few months though, based on new developments in this field...

Making a young girl fall in love with Computer Science...



I touched and felt a computer for the first time in my life in October 1992, a week after joining the engineering college. I recollect that in my first year of engineering I simply loathed the subject called 'computer programming', and I vividly remember arguing for 20 minutes with my professor that I could do everything with my scientific calculator that she was saying could be done by a computer. I was very frustrated that my clarity on the topic was no better at the end of the argument. I then spent the next 2 weeks reading the prescribed textbook, 'Programming in FORTRAN 77' by V Rajaraman, multiple times, which helped to some extent, but at the end of all that I still disliked computers. I strongly felt that they and I did not understand each other very well and could never be together. I then firmly made up my mind that after completing my first year, I would never again look at a computer in my life. I was in Mechanical Engineering at that time, which did not have any more courses on computers after the first year. I was also aspiring to do an MBA after my engineering, and this further added firmness to my resolve.
After finishing the first 4 months of first year engineering, I expressed my anguish and frustration to a close friend of mine, who was regarded as the computer whiz kid in college, and confessed that I HATED computer science. He said nothing and simply gave me a book, 'Introduction to Programming' from the Schaum Series, and that helped me a lot in understanding the real power of computer programming. I also chanced upon a book called ‘Applications of Computers in Business and Industry’ that for the very first time gave me a very broad perspective and helped me discover the potential and possibilities in this field. I had no clue till that day that computers could be useful to mankind in so many ways. At that point I realized that computers would play an important role in the future, and hence I resolved to work on the subject deeply and learn as much as I could within the remaining part of the year. I was still quite firm that I would abandon computer science after my first year, and therefore it was important that I learn whatever was possible within this limited time.
During the next 7 months, unbeknownst to me, I got quite intimately involved with the subject, and before I realized it, I was madly in LOVE with computer science. I was so much in love that I shifted to the Computer Engineering branch after the first year, and an intense and raging affair with computer science has ensued ever since. I did finish my MBA as aspired, and then chose to join the computer industry in the management stream and have remained there. Computer Science continues to be my first love even today, and the intensity of that love remains as strong now as it was then. This is one of those decisions in life that I will never regret.
Computer Science is a philosophy and not a technology. It is a way of thinking and perhaps even a way of living. In a nutshell, Computer Science has transformed me in every possible way and has influenced varied aspects of my personality: mindset and thinking, self-esteem and confidence, way of looking at the world, approach to problem solving, approach to technology, reasoning and higher-order thinking, and so on. Nothing in me has been left untouched by this amazing subject [it probably merits a full-fledged article to explain my fascination for it!!]
It surprises me that even after two and a half decades, the percentage of girls who pursue computers and mathematics is very low, the reason being mainly a lack of interest and awareness. There are several reasons why many youngsters might not be drawn to the subject, as it is unique and exotic in its own way and has an élan all of its own. It is extremely important that the correct approach is adopted while introducing the subject to a novice. The right kind of appreciation needs to be developed before one can start liking the subject and discover the elegance and beauty of computer science.
The discovery of my love for computer science was purely accidental, and the probability of that happening in those days was very, very low. It took me one complete year, and a very frustrating one at that. There was a very high probability that I would have abandoned the subject for good. The probability of abandoning the subject and considering it a drudgery holds good for students at university even today. I am not talking about learning a programming language or a technology or a platform, which is very easy and which one might be driven to do by the lure of a well-paid software job. That is probably easy to accomplish in developing countries, where one tends to be driven by the job market and money, as one might not be able to afford to be driven purely by interest or passion. But if we really want to increase participation in this field in a country like the U.S., where typically many people tend to choose to work in the area of their passion, one needs to be able to generate that intense passion and fervor.
It has a lot to do with creating a mind-blowing first impression, generating a deep urge to know more, creating an enabling atmosphere for getting intimate with the subject, developing a mindset that will allow one to realize and then fully absorb its awesome beauty, and finally falling in love with it. In the context of the efforts being made by the U.S. government and corporates to generate interest in the STEM area, especially among young girls, it is imperative to note that the goal is in reality all about generating passion. This needs to be looked at as an endeavor that helps youngsters experience an ethereal beauty and immerses them in a magical world. The curriculum, reading materials, classroom sessions, practical sessions on the computer etc. should be carefully developed with the aim of creating ‘an experience like never before’ in the minds of youngsters, and must focus on their feelings, emotions and mindset as a central theme. It is all about making them fall into a love that lasts a lifetime, and not about making them learn a technique or luring them with good career prospects.
What concerns me is that the interest shown in computer science by many youngsters, and especially young girls, remains lukewarm even today. I was somewhat surprised when I read somewhere that in subjects like computer science, girls constitute only about 15% of the overall students in the class at university level. And these are numbers for the U.S., which is arguably the most progressive country in the world. I am aware of several initiatives by the U.S. government and large corporates in the last few years to increase awareness and, ultimately, the representation of women in the STEM field, but we indeed have a long way to go before the goal is reached. On the contrary, countries like India buck the trend, with around 37% women enrolled today in computer science at university level. It is very startling to note that the corresponding number in the U.S. was around 35% in 1980 and steadily dropped over the years to a low of about 10% in 2010, before rising slowly to reach roughly 15%.
I found the following observation by a researcher to be quite interesting and profound: “The omission of women from the history of Computer Science perpetuates misconceptions of women as uninterested or incapable in the field”.
Keeping this in view, it would be a good idea to familiarize our young girls with the history of computer science and the path-breaking contributions made to this field by women in the 19th and 20th centuries, which would inspire them. There is a lot of information on the Internet, and many books are available on this topic, which I am sure could generate interest amongst youngsters in this seemingly esoteric but in reality adorable subject.
Let me quickly talk about the four women who in my opinion were the pioneers in the field of computer science during their times and hopefully this will act as a starting point….
Ada Lovelace could be termed the “First Lady of Computer Science". She was a great mathematician and is credited with writing the first computer program, as early as 1843: an algorithm, based on quite complex mathematics, intended to be carried out by Charles Babbage’s ‘Analytical Engine’, the precursor of the modern-day computer. The programming language used for developing modern ‘real-time military’ software systems is named Ada in her honor. Ada is a very structured and evolved programming language with a number of innovations that could be termed works of ingenuity, and several later languages have inherited concepts and principles from it.
Grace Hopper was one of the very few computer scientists working for the U.S. Navy in the 1950s. Grace conceptualized and designed a new programming language for use in business applications and also wrote a full-fledged compiler for it. The modern-day COBOL programming language bears a strong resemblance to the language that Grace designed.
Edith Clarke was initially trained as a mathematician and worked in that area before proceeding to pursue her Master’s in Electrical Engineering at MIT, which she received in 1919, becoming the first woman to be awarded that degree there. Edith pioneered the use of advanced mathematical methods in engineering computation; she leveraged mathematics and early computing devices to solve complex problems related to electrical power systems. This was unheard of in those times and forms a basis for the modern-day computer-based automation and control systems used in industry today.
Hedy Lamarr was a very popular actress of her time who also made significant contributions to the field of wireless communications. Together with the composer George Antheil, she pioneered the concept of ‘frequency hopping’, which is a foundation of modern-day wireless data communication technologies. The intended application of her invention in those days was to control and guide torpedoes while in motion and to prevent the guidance signal from being jammed or intercepted by the enemy. She was far ahead of her time, both in being able to conceive this idea and in how long it took others to comprehend what she was talking about. Those were the Second World War (1939-45) days, and she was a famous actress very well known for her ‘stunning looks’ and ‘ethereal beauty’; not many knew of her intellectual abilities. Engineering was an area that she picked up as a hobby in her spare time, which indeed speaks volumes about her intellect. The invention was, however, dismissed right away by the naval authorities, who asked her to stop wasting time on areas she did not understand and instead use her popularity to raise funds for the war. This was an invention that could potentially have altered the history of the war. The advanced missile systems and anti-missile systems of today accomplish objectives similar to those envisaged by Hedy Lamarr in the 1940s. It was only in 1997 that her contributions were finally recognized and she was honored with a special international award as a pioneer in the area of wireless communications.
Better Late than never. Isn't it?

Wednesday, December 30, 2015

Alpha Intelligence -- Behind the scenes...

Last Week, I launched my website by the name "Alpha Intelligence" dedicated to the fields of Artificial Intelligence, Machine Learning & Deep Learning....

The website features a few of the AI-based interactive demos and intelligent agents that I have programmed myself... Today I will attempt to delve a little deeper into a 'behind the scenes' view of things, and into my logic and rationale for choosing these specific technology use cases for building my AI agents.....

The first interactive demo, which recognises handwritten characters chosen by the visitor to the site, represents a fundamental building block of Artificial Intelligence and has been used as a globally accepted benchmark for judging the quality of an AI algorithm based on the accuracy of its predictions. The program showcased by me on the Alpha Intelligence website has about 94% accuracy in recognising handwritten characters.... I have also experimented with a more advanced version of this algorithm that gives close to 98.5% accuracy....

The large sample of handwritten digits used for training and testing these algorithms was drawn from real life: handwriting samples collected by the U.S. National Institute of Standards and Technology from Census Bureau employees and high-school students. This sample is referred to as the MNIST dataset and acts as a global benchmark in the field of AI....
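For readers who would like to try something similar themselves, here is a minimal sketch (not my actual Alpha Intelligence program) of training a small neural network on the MNIST benchmark using scikit-learn. Accuracy in the low-to-mid 90s is typical for a network this small, which is in the same ballpark as the demo's 94%...

```python
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Download the 70,000-image MNIST benchmark (28x28 handwritten digits)
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X / 255.0                                    # scale pixel values to [0, 1]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=10_000, random_state=42)

# A deliberately small multi-layer perceptron, trained for a few passes only
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=30, random_state=42)
clf.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```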

Though this is a simple example of AI in action, it is not difficult to see the benefits that this technology can bring... The first successful real-life use case that has been in vogue is the automatic reading and processing of cheques by some of the large American and European banks.... The accuracy there is usually close to 100%, as people are more careful to write legibly on bank cheques than on a mundane task such as filling in data on a federal census questionnaire....
There are a number of other use cases, such as the automatic processing of handwritten application forms et al. The success rate is very high in areas where one expects a limited set of inputs, and where the range of outputs to be given by the handwriting recognition algorithm is therefore also naturally limited in scope... Needless to say, processing structured data is far easier than processing unstructured data...

The second demo that I have built and showcased on the Alpha Intelligence website is pivotal to the field of AI, as the concepts and algorithms used for the interactive demo are the brains behind more complex functionality such as robotic vision and other advanced image and object recognition tasks, including the recognition of objects in motion, such as in video footage....

I did receive feedback from a few of you that the quality of the images used in the second demo on the website, which automatically identifies and labels objects, was not very good and that the resolution was low.... That is actually intentional: the benchmark data used here, the CIFAR-10 dataset, is meant to test the limits of AI capabilities, the ultimate objective being vision that is far better than human vision. The intent is that robots and other AI systems should be able to recognise and label even blurred, low-resolution, distant or partial images, so that they can act as vision-enhancing aids to human beings. Such robots could be used in military applications such as reconnaissance missions, or act as intelligent assistants to visually impaired people, thereby greatly enhancing the quality of their lives and giving them a huge boost of independence....

The program that I developed for this demo on the website is only about 85% accurate, and there is some more distance to be travelled in enhancing the accuracy.... The algorithm implemented here follows concepts very similar to those used by Google Photos for automatically labelling images. A few months ago there was a furore on social media when the Google image recognition algorithm mistakenly classified an African-American young lady as a 'gorilla'. Image recognition is still in its inception stages, and we have quite some distance to travel in this space before we can put it to use in real life for mission-critical applications....
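Again purely as an illustrative sketch (not the exact network behind my demo), a small convolutional network for CIFAR-10 in Keras looks roughly like this; a toy model of this size typically lands around 70% accuracy, and pushing towards 85% and beyond needs deeper architectures and more training...

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# CIFAR-10: 60,000 tiny 32x32 colour images across 10 object classes
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

# A deliberately small convolutional network; real systems go much deeper
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),          # one score per class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```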

And the final AI demo that I programmed and showcased on my website is automatic translation from English to French..... This is in fact the most complex of all the three demos that I have showcased on my website....
It is based on the concept of back-to-back Recurrent Neural Networks with LSTM cells, using sequence-to-sequence modelling.
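For the technically inclined, the skeleton of such an encoder-decoder (sequence-to-sequence) model can be sketched in Keras as below. The vocabulary sizes and hidden dimensions are placeholders I have assumed for illustration, and real training would of course need a corpus of paired English-French sentences...

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Illustrative sizes only: token counts and hidden width are assumptions
num_encoder_tokens, num_decoder_tokens, latent_dim = 70, 90, 256

# Encoder: reads the English sequence and summarises it into its final states
encoder_inputs = layers.Input(shape=(None, num_encoder_tokens))
_, state_h, state_c = layers.LSTM(latent_dim, return_state=True)(encoder_inputs)

# Decoder: generates the French sequence, conditioned on the encoder's states
decoder_inputs = layers.Input(shape=(None, num_decoder_tokens))
decoder_lstm = layers.LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
                                     initial_state=[state_h, state_c])
decoder_outputs = layers.Dense(num_decoder_tokens,
                               activation="softmax")(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer="rmsprop", loss="categorical_crossentropy")
model.summary()   # training needs one-hot encoded English/French sequence pairs
```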

If we denote the computational complexity of the first demo (character recognition) as X, then the complexity of the second demo (automatic labelling of images) would be roughly 10X, and that of the third demo (language translation) roughly 100X....

Language and other cognitive skills that come so naturally and easily to humans are, on the other hand, very, very difficult for computer systems to comprehend and decipher.... It was only in mid-2015 that the 'Google Translation Engine' and the 'Microsoft Skype Translation Service' were able to achieve near real-time responses and near human-level accuracy...

So what's  next on the Alpha Intelligence website?

I am planning to add further functional capabilities to the 'Alpha Translation Engine', such as translation from 'English to German' and 'English to Spanish', in the coming days....

The other endeavour that I intend to undertake is a replication of the "Google Street View" vision and image recognition pipeline that automatically scans streets for house numbers, recognises them and stores them in character format in the "Google Street View" database....This has been one of the big breakthroughs in the field of AI in recent years, and I am sure it will be real fun trying to take a shot at it and seeing where it will lead me.....😊

Have a great Weekend !!!

 #Musings  #Tech  #AI  #Robotics

Launching Alpha Intelligence....

Artificial Intelligence has been a subject that has fascinated me since my engineering days... It's more of a passion, or even a hobby, to try to keep myself updated on this subject. For some strange reason or other, I never got a chance to work directly in this area professionally after my MBA... And I still remain an "amateur" in this field even to date...

2015 can rightly be called a year that witnessed a "Cambrian Explosion" in the field of Artificial Intelligence... And industry leaders like Google, Facebook and Microsoft are unanimous in stating that 2016 will be the 'year of artificial intelligence'.

To get a firmer grasp of this subject which has always fascinated me, I decided to go ahead, drill down and do some "ground-up" AI programming with a clean-slate approach, implementing some of the most recent AI, Machine Learning and Deep Learning algorithms...

I have created a website on the CLOUD that hosts a few of my 'grounds up' programs in a 'LIVE INTERACTIVE MODE'.

www.alpha-intelligence.net

It would really encourage me if you could check out these interactive demos and let me know what you think..

This humble attempt of mine obviously cannot hold a candle to the highly advanced and superior technological work being done at Google, Facebook, Microsoft et al and hence will need to be looked at in a manner akin to the way one views a piece of hand-made art or other handicrafts developed using limited resources, knowledge and means....😊

Hope you enjoy the experience of interacting with the AI agents on my website.... And do not forget to tell me as to what you think of this endeavour....

www.alpha-intelligence.net

Journey through the Dark and Winding Alleys of 'Human Brain'......


Yesterday, I ran into a good old friend of mine with whom I had lost touch after finishing college. She and I were like-minded in some ways and had fairly similar thought processes, though she was certainly better read and probably more intelligent. I recollect that in those days we would debate a lot on subjects such as English literature, human behavior and philosophy, areas in which neither of us had any formal education. Even though these three areas were not even remotely connected to engineering, we somehow found them interesting and worthy of debate. I have now come to realize that these are not really three disparate subjects; they actually coalesce into a single overarching theme which defines, drives and explains the way human beings in a society interact with one another.


Great writers like Shakespeare, Jane Austen and Pearl S Buck became great only because they were able to deeply introspect, reflect upon and critically analyze human behavior, understand human psychology intimately, and thus establish a deep subliminal connection with their audience. They were not only able to understand the nuances and subtleties of human emotions, feelings and thoughts, but were also able to clearly differentiate the various hues and shades within human feelings and emotions while communicating them in the form of words and phrases to the audience. That would have required a mastery of the English language, as they would have had to select and map the exact word or phrase, or combination of these, from among multiple words of similar but differently nuanced meanings, so that the exact variant or hue or shade of the feeling or emotion reaches the audience without any transmission losses. Similarly, great psychologists such as Freud and Jung relied heavily on literature and history for understanding human behavior intimately, and then attempted to define the workings of the human mind and the cause-effect analysis of human actions and emotions. Philosophy is intertwined very strongly with human behavior, as it helps codify right and wrong and provides us with a framework for logical reasoning and experimentation. The credo, or the accepted rules, of human behavior has many of its roots here.

Now coming back to the old friend I met yesterday: after an exchange of pleasantries, we discussed the key milestones in our careers and personal lives, shared notes about our families, and then the topic quickly moved on to our respective philosophies of life and how they had evolved over the last two decades. Interestingly, our respective thought processes had evolved in a similar pattern over the years, and by and large the core philosophy remained the same.

After she left, the first thought that came to my mind was that it would be a great idea to author a joint research paper on a topic of import, as it would be possible to work in parallel on different sections and integrate the work very quickly into a cohesive whole, substantially saving throughput time. In fact, if there were a way to identify the wiring patterns in the brains of people in a team, then it would be efficient to deploy similarly wired people on projects that require similar thinking and dissimilarly wired people on projects that need diversity of opinions and ideas. I started reflecting on the views exchanged with her and on the evolution of our respective thought processes over the years. It occurred to me that the thought processes, attitude and mindset of a person are a function of: the basic wiring and programming of the brain that has its roots in the genes, and the consequent re-wiring that happens continuously based on experiences, increased knowledge, changes to the belief system and core philosophy from a renewed view of life, changes in priorities, and a newer perception of reality based on changes in the environment, etc. The extent and rate of re-wiring, and thus the speed and direction in which the brain evolves, changes from person to person. If the start state, the evolution of the brain and the end state are similar in different persons, then it is safe to assume that the net effect of the interplay of the different variables I mentioned above has resulted in a similar outcome, even while the individual variables might have been different.

My thoughts then drifted to the question of what would be the most simplistic structure of the human brain that could explain similarity of thinking processes, or like-mindedness. Being a computer engineer by training, the easiest explanation that occurred to me was that the human brain could possibly be represented by logical blocks, such as those pertaining to logical thinking, analytics, reading and comprehension, creative arts, science, inter-personal relationships, emotions and feelings, verbal communication etc. Each of these logical blocks would be controlled by a sub-program which has a distinct programming logic that governs how that block functions and processes information. Thus each of the logical blocks would be controlled by a sub-program, and all these sub-programs would be controlled by a main program. Needless to say, these sub-programs run in parallel and also interact with one another. When we say people are like-minded and have a similar thought process, we could perhaps conclude that the corresponding logical blocks in like-minded people have similar programming, with probably a major part of the ‘brain wire code’ overlapping.
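Just to make the analogy concrete (and it is only an analogy, with block names and behaviours invented purely for illustration), the idea of a main program fanning a stimulus out to a set of sub-programs might be sketched like this:

```python
# A toy rendering of the analogy above: each "logical block" of the brain is a
# sub-program, and a main program routes a stimulus to all of them.
def logical_reasoning(stimulus):
    return f"reasoning about '{stimulus}'"

def emotions(stimulus):
    return f"feeling something about '{stimulus}'"

def verbal_communication(stimulus):
    return f"putting '{stimulus}' into words"

SUB_PROGRAMS = {
    "logic": logical_reasoning,
    "emotion": emotions,
    "language": verbal_communication,
}

def main_program(stimulus):
    """The 'main program' fans the stimulus out to every sub-program."""
    return {block: run(stimulus) for block, run in SUB_PROGRAMS.items()}

print(main_program("meeting an old friend"))
```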

I started wondering about the unknown mysteries of the human brain, which is arguably the most complex and intricate creation of God. Brain research in recent times has advanced significantly, although what is known is minuscule compared to what is unknown at this point of time. In my college days, though Artificial Intelligence was still in its infancy, I was very optimistic that computer science, communications and robotics would very soon enable the creation of a replica of the human brain, if not of the complete human being. When I entered the real world, I was grounded by the reality that the rate of advancement in neuroscience, neural networks and biochemistry had been very slow and would continue to remain slow in the coming years. Traditionally, attempts were made to simulate a human brain using a computer system which mimics, step by step, the functioning of the actual human brain as revealed to us by neuroscience. However, the intricate details of the internal functioning of the human brain which are necessary for simulation via a computer system have remained elusive for many years due to the slow progress in the field of neuroscience.

Today, however, the advances in technology (in areas such as Artificial Intelligence, Machine Learning, advanced analytics, spatial navigation, sensor technology, cognitive computing and pattern recognition, Big Data etc.), together with the unprecedented rate of increase in computing power, memory and storage (thanks to Moore’s Law), have provided us with several alternative ways to simulate the human brain which do not rely completely on the granular details provided by neuroscience. The endeavor today is to build a computing system that will give identical outputs to those of the human brain when given the same inputs. Since the method of building a computing system that is an exact replica of the human brain (meaning that not only the output but also the processing and decision logic, as well as the sequence of steps, should be exactly the same) does not appear viable at the moment due to the limitations of neuroscience, the alternative method of using a black-box approach is widely being adopted. In this method we simulate the functioning of the brain by harnessing enormous computing power and very large datasets. The algorithms for the black box are developed on the basis of empirical research and trial-and-error techniques, with the objective of generating the same outputs as the human brain when given the same inputs. This does indeed excite me, and I will probably see a computer system that is as intelligent as, or perhaps more intelligent than, the human brain in my lifetime.

Top guns such as Ray Kurzweil (of Google) say that the day when artificial intelligence will surpass and outwit human intelligence is not very far off. However, this could potentially have very dangerous and unpredictable ramifications. There are fears that if computers become more intelligent than humans, they might then start working at an intellectual level that is completely incomprehensible to humans. This is bound to create utter chaos and confusion, as well as a huge degree of unpredictability. Clearly, a Frankenstein's-monster-like scenario unfolding cannot be ruled out. Some time back, I tried to visualize how computers with more intelligence than humans could really take over the world, and why this idea is not as far-fetched as one might think. However, I shall reserve this topic for a future article where I will dwell upon it in depth.

Though alternative methods as described above for simulating the human brain in a black-box fashion are being adopted, the innate thirst to demystify the inner workings of the human brain remains unquenched. Teams of multi-disciplinary researchers across the world continue to work on unravelling the mystery of the human brain. One such endeavor is the ‘Human Brain Project’ initiated by the European Union in 2013, which has the vision of replicating each and every function of the human brain, down to the neuron and synapse level, by around 2025. This is somewhat similar to the ‘Human Genome’ project, which leveraged computing power to decipher the information embedded in human DNA to a great degree of granularity. If successful, this project will pave the way for the exact replication of individual human brains in a computing system. That would mean that a computing system holding the replica of an individual’s brain would “think, take decisions, feel, emote, write and speak” exactly like the individual whose brain it replicates. It would replicate not only the information in the individual’s memory but also the processing logic. Neuroscientists envisage that it will be possible to capture the information in a person’s memory, as well as the processing logic in his or her brain, by detecting and deciphering the electrical impulses in the brain and the bio-chemical messages exchanged between neurons. These would then be digitized and transferred to a computing system. This would actually mean that long after a person is physically gone from this world, the computing system with a replica of the person’s brain would continue to think, write, speak, make decisions and display feelings and emotions exactly the way the person would have done had he or she been alive.

Let’s now return from the fantasy world and do a quick reality check!!! The human brain is an extremely intricate and supremely complex creation of God, and human beings are indeed fortunate to have been blessed with this magical gift. To get a quantitative perspective on the complexity, let’s take a look at the rat’s brain, parts of which have been mapped in great detail today. The neocortex is the part of the brain which deals with conscious activity, and it is the rat’s neocortex that has been modelled most thoroughly so far. The rat neocortex consists of roughly 10,000 neocortical columns, each column about 2 mm tall and 0.5 mm in diameter. Each neocortical column has about 10,000 neurons and of the order of 10^8 synapses (synapses are the links between neurons). In summary, we are dealing with of the order of a hundred million neurons and a truly vast number of synapses. And remember, we are still talking about a rat’s brain!!! The human brain is many times more complex, with a whopping ~20 billion neurons in the neocortex alone and of the order of 10^15 synapses.
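The quick arithmetic behind these orders of magnitude (using the figures quoted above, which are themselves only rough) looks like this:

```python
# Rough arithmetic with the figures quoted above (orders of magnitude only)
columns_in_rat_neocortex = 10_000
neurons_per_column = 10_000
synapses_per_column = 10 ** 8

rat_neurons = columns_in_rat_neocortex * neurons_per_column      # ~10^8
rat_synapses = columns_in_rat_neocortex * synapses_per_column    # ~10^12

human_neocortex_neurons = 20 * 10 ** 9                           # ~2 x 10^10
human_synapses = 10 ** 15

print(f"Rat neocortex : ~{rat_neurons:.0e} neurons, ~{rat_synapses:.0e} synapses")
print(f"Human brain   : ~{human_neocortex_neurons:.0e} neurons, ~{human_synapses:.0e} synapses")
```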

The task ahead for the scientists is humongous. They will need to understand the processing logic in each neuron and comprehend every one of the messages being exchanged between neurons. And these messages are not limited to electrical impulses, which are relatively easy to decipher, but consist mostly of a humongous number of bio-chemical messages. These are the result of a large number of chemical reactions that take place in the brain on a continuous basis. Every feeling, thought and emotion in the brain triggers chemical reactions which produce bio-chemical messengers that are exchanged between neurons. Some experts in the field of neuroscience from across the world have written in journals like ‘Scientific American’ and ‘Nature’ that the objectives of projects such as the ‘Human Brain Project’ and other allied projects are not very realistic and are probably impossible within the timelines mentioned. Even assuming that the neuroscientists have the wherewithal to do a phenomenal job on their part, there will still remain huge challenges on the computer science and systems engineering front: developing highly scalable and massively parallel processing architectures of this magnitude, and integrating these systems with ultra-high-speed memory and massively gigantic storage.

Optimists that we humans are, let’s assume for a moment that the activities mentioned above are indeed possible and that scientists will successfully decipher the secrets of the human brain. The reality, however, is that all the challenges I talked about above concern only the functioning of the conscious brain, which is primarily the neocortex. As I mentioned earlier, the neocortex has ~20 billion neurons and is only a part of the overall human brain, which has ~100 billion neurons. The rest of the human brain (excluding the neocortex) deals with more arcane subjects such as unconscious and subliminal processes, as well as dreams. As you will appreciate, these are subjects that lie in the mystic realm, and humans have an extremely limited understanding of them; the other areas of the human brain are therefore arguably even harder to decipher. Besides, we will also have to understand the interplay between the conscious, the unconscious and dreams: how these three units interact with each other, and what outcomes result from these interactions. Thus it all seems to be getting more and more complex, and it looks as if further complexity will be unearthed as we dig deeper into the recesses of the human brain. At this point, it appeared to me as if I had reached an inflection point in my thought process!!! Until now, we have traversed the series of obstacles in the path of being able to unravel the mysteries of the human brain, and for a layman like me the odds were too overwhelming to believe that we can truly replicate the human brain in its entirety. However, eternal optimists that we all are, the thought of my engineering dream (to see humans completely unravelling the mysteries of the human brain) failing was certainly not palatable. This led me to take refuge in my other area of passion, philosophy, to seek an answer to the question of whether the endeavor of replicating the human brain is possible at all, even at a point very far in the future. I pondered over this question for a really long time, and finally there seemed to be some light at the end of the tunnel. It seemed to me that the answer could be derived from the thought that we need to work with a higher level of intelligence to solve a problem than the intelligence that created the problem in the first place. This essentially translates to the conclusion that human beings will never be able to fully comprehend and decipher all the secrets behind the working of the human brain, as they will never be more intelligent than themselves. At best, humans would be able to simulate the brains of mammals with lower orders of intelligence, and perhaps understand the workings of the human brain to a certain extent, which will however be nowhere close to the complete demystification that I have been dreaming about.

To add further substance to the thought process I have just walked through, let’s look at another example. When Einstein postulated the ‘Theory of Relativity’, the entire world believed that man had finally understood the principles that govern the existence of the universe and that every event in the universe could be explained using the laws of physics. No one was able to find loopholes in Einstein’s theory, probably because no one else operated at Einstein’s intellectual level. After some years, Einstein himself realized that his theory had limitations when it came to black holes and the points of infinite curvature at their centres, which are called ‘singularities’. All the laws of physics seem to fail at the point of singularity, including Einstein’s Theory of Relativity, and Einstein had no answer or explanation as to why this was happening. Being not only exceptionally intelligent but also a deeply spiritual person, he knew the failings of the human race and also the supreme power of the creator. He is often said to have explained the phenomenon by remarking that a singularity is a point in the universe where “GOD DECIDED TO DIVIDE BY ZERO”. I believe the issue is settled: demystifying all the secrets behind the working of the human brain and creating its replica is not something that will ever happen. Though we might be able to come out with some theories which explain the workings, the fact still remains that we do not know what we do not know, and thus can never claim mastery.

At the beginning of my journey, I started out on a quest to identify like-minded people using a scientific and empirical model, in a programmatic manner. The path involving a combination of neuroscience and computer science appeared to be the best option at the outset, but along the journey I realized the pitfalls of my hypotheses. So probably the best and simplest solution is to leverage the age-old and time-tested techniques of psychoanalysis to understand an individual’s thinking process in detail. We could then harness this information while choosing the individuals in each team, so that we get the best productivity and are able to choose either similar or dissimilar individuals based on the need.

The interesting by-product of this journey is an understanding of how a simulation of the human brain using a pure black-box approach might be achieved. This is a purely technology-based approach, as I explained earlier: the computing system will leverage emerging technologies such as big data and advanced analytics, together with Artificial Intelligence, artificial neural networks, spatial navigation, cognitive computing and pattern recognition, to produce the same output as the brain does for a particular input. I had talked about the possibility of more intelligent systems taking over from human beings and ruling over us. These fears are totally unfounded, as no human will ever release a computing system with more intelligence than himself or herself into a live environment. At best, these experiments would be done under controlled conditions in secure labs, and would be more about satisfying the human intellectual appetite for creating computing systems that are more intelligent than ourselves.

Having now come to the end of the journey, the following are the key take-aways:

1) Researchers working on simulating the human brain must limit themselves to understanding the workings of the human brain only to the extent that is realistic. The research should be such that it is useful for greater human cause. For example: To accelerate and support Disease diagnosis and finding economical cures so that patients with ailments of the brain and nervous system are immensely benefitted.

2) Artificial Intelligence and Machine Learning in the ‘Digital’ era must focus on solving real world problems and provide innovative approaches and solutions. The objective need not be to develop a system to have more intelligence than humans but to build a system that is needed for solving complex human problems and make the world a better place to live in.

3) Good old Psychology, Psychoanalysis and principles of Human Behavior/ Organization Behavior which are best practiced by humans cannot be replaced by computer systems

4) And finally, the last but certainly not the least:           “STAY CONNECTED WITH OLD FRIENDS”