From Coffee Breaks to Code Breaks
Some find it scary, many find it useful, and others simply use it to finish their homework – Artificial Intelligence, or AI, is one of the most transformative technologies of our time.
And for better or worse, it's here to stay – so whether you are a student, a professional, or just someone curious about this fascinating technology, it's time to understand what the machines of the past, present and future have in store.
This article traces how AI has transitioned from a theoretical concept to a practical tool, mirroring the agile journey of software development.
Not so long ago, the average person's perception of AI came mostly from the realm of science fiction,
where robots would become sentient beings and develop human-like thoughts and emotions.
THE PAST:
AI: "Any task performed by a program or a machine that, if a human carried out the same activity, we would say the human had to apply intelligence to accomplish the task."
1950-1960
The term AI was first introduced by John McCarthy back in the 1950s. Marvin Minsky focused on machines building internal models of their environment, while Claude Shannon's work emphasized the importance of mathematical models in understanding complex systems.
The Dartmouth conference proposal outlined ambitious goals for the development of artificial intelligence: exploring the possibility of machines using language, forming abstract concepts, solving complex problems, and improving themselves.
Prediction: create machines capable of intelligent behavior comparable to that of humans.
Alan Turing, the man who helped crack the Enigma code during World War II, played a crucial role in laying the foundations of theoretical AI (as dramatized in The Imitation Game). The Enigma code was a complex cipher used by the Germans to encrypt their military communications.
His groundbreaking work on decrypting it not only gave the Allies a significant advantage in the war, but also paved the way for the development of modern computing and AI.
Researchers developed a series of problem-solving programs. These programs allowed computers to be taught strategies for playing checkers, solving word problems in algebra, proving logical theorems, and speaking English. With advances in technology and a better understanding of the human brain, the capabilities of AI have changed drastically over the years.
The ancient concept of the Golem – a powerful artificial man – resembles the modern idea of creating intelligent entities.
1960-1970
Researchers primarily concentrated on the development of neural networks. They delved into the cognitive processes that underlie human thought and explored the intriguing possibility of autonomous, self-thinking machines.
Symbolic AI involved representing knowledge and reasoning using symbols and logic. The development of early programming languages like LISP played a significant role in this phase. Researchers aimed to create expert systems that could mimic human decision-making in specialized domains.
Milestone: ELIZA, the first ever chatbot.
1970-1980: AI Winter
High expectations, coupled with limitations in computing power and the complexity of AI problems, resulted in decreased funding and interest in AI research. Many projects were abandoned due to unrealized goals. Because of the limitations of symbolic AI, over the following decades researchers began exploring connectionist models inspired by the human brain's neural networks, which sparked renewed interest in neural networks and machine learning.
Milestone: IBM's Deep Blue's victory over Garry Kasparov in 1997.
Millennium
With the new millennium came an increase in computing power and the availability of vast amounts of data, giving AI research a new lease of life. Algorithms like convolutional neural networks (CNNs) and recurrent neural networks (RNNs) showed remarkable success in image and speech recognition, language processing, and more. By 2010, machines were being developed that could learn from data, identify patterns, and make decisions with minimal human intervention.
THE PRESENT
Deep Learning – enables machines to learn richer representations and take the initiative
Narrow AI – Google Search, Netflix recommendations, Alexa
Transformers, on the other hand, can track the relationships in sequential data and learn the context, resulting in much more accurate translations (Google Translate has become noticeably more accurate as a result).
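The core idea behind transformers – letting every position in a sequence weigh its relationship to every other position – can be sketched in a few lines. This is a toy, single-head, unparameterized version of self-attention (real transformers add learned query/key/value projections and multiple heads); all names here are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: shift by the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """Toy self-attention: each position's output is a context-weighted
    mix of every position's vector, so relationships across the whole
    sequence inform every output."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)       # pairwise relevance of positions
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ X                  # blend inputs by relevance

# Three "token" vectors in a 4-dimensional embedding space.
X = np.random.default_rng(0).normal(size=(3, 4))
out = self_attention(X)
print(out.shape)  # one contextualized vector per input position
```

This global mixing is what lets transformers capture long-range context in one step, where an RNN would have to pass information through many sequential steps.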
BERT and GPT emerged.
TayTweets – Microsoft's Tay chatbot – a notable failure
THE FUTURE
The future of artificial intelligence holds immense potential to transform various aspects of society, from healthcare and education to business and governance.
Emerging Trends in AI:
- Advanced Machine Learning Algorithms
- AI and Edge Computing
- Explainable AI (XAI)
- AI in Personalization
- Ethical and Responsible AI
Challenges in the Future of AI:
- Data Privacy and Security
- Bias and Fairness
- Regulation and Governance
- Job Displacement and Workforce Transition
- Energy Consumption and Environmental Impact
Agile in AI Development
How AI development aligns with Agile principles:
Iterative Learning: AI models improve over time with more data and feedback.
Collaboration: Cross-functional teams (data scientists, engineers, domain experts) work together.
Adaptability: AI systems must adapt to new data and changing environments.
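The "Iterative Learning" point above can be sketched as a toy feedback loop: a model makes a prediction, receives feedback, and nudges itself closer to the truth with each new data point. This is a hypothetical, minimal example (online linear regression via gradient steps), not any specific library's API.

```python
class OnlineLinearModel:
    """Toy model that improves iteratively as (x, y) feedback arrives."""

    def __init__(self, lr=0.1):
        self.w = 0.0   # slope, learned from data
        self.b = 0.0   # intercept, learned from data
        self.lr = lr   # learning rate: size of each correction

    def predict(self, x):
        return self.w * x + self.b

    def update(self, x, y):
        # One gradient step on squared error -- the feedback loop.
        err = self.predict(x) - y
        self.w -= self.lr * err * x
        self.b -= self.lr * err

model = OnlineLinearModel()
# Stream of feedback drawn from the true relationship y = 2x + 1.
stream = [(x, 2 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0]] * 50
for x, y in stream:
    model.update(x, y)

print(model.predict(3.0))  # approaches the true value 7.0
```

Each pass through the data is a small "sprint": the model ships a prediction, gathers feedback, and adapts – the same inspect-and-adapt rhythm agile teams apply to software.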