AI, Generative AI and You

A trip down memory lane before we get started. In 1936 Alan Turing introduced the concept of a universal machine (the Turing Machine) that could simulate any algorithmic process. In 1956 John McCarthy coined the term Artificial Intelligence as we know it. In the 1970s there was some early research and optimism around the field, but nothing concrete. The 1990s is when AI research shifted focus towards machine learning and statistical approaches. For example, in 1997 IBM's Deep Blue defeated world chess champion Garry Kasparov, one of the big breakthroughs that stunned the world at the time. There was also good progress on algorithms like decision trees and early neural network models.

In the 2000s there were big advancements in Big Data and more impactful progress on neural networks. In 2015 another breakthrough moment occurred when Google DeepMind's AlphaGo became the first program to defeat a professional Go player, showcasing advanced AI capabilities in complex tasks. We also started to learn that AI can excel at specific tasks like image and speech recognition. In 2015 OpenAI was founded with the mission to ensure that artificial general intelligence (AGI) benefits all of humanity. In 2016 one of the biggest upsets came when AlphaGo defeated world champion Go player Lee Sedol, a milestone in AI history.

Now let's look at how AI came to be as disruptive as we know it today. The breakthrough moment occurred in 2017 when researchers at Google published the landmark paper "Attention Is All You Need", which introduced the Transformer model, now a foundational architecture in natural language processing (NLP) and other fields. The Transformer achieved state-of-the-art results on various NLP tasks, such as translation and language modeling.

2018 is when OpenAI released the first Generative Pre-trained Transformer (GPT) model. 2019 brought a step forward with GPT-2, featuring 1.5 billion parameters. Between 2020 and 2022 OpenAI released GPT-3 and then ChatGPT, a model fine-tuned from the GPT-3.5 series specifically for conversational use. It was trained to engage in dialogue, answer questions, provide recommendations, and generate usable content. In 2023 OpenAI released GPT-4, an even more advanced version of the GPT series. It is important to understand how we got here before we learn more about Generative AI.

The difference between AI and Generative AI is that the latter is built to generate content; that is its main use, and it helps humans make sense of things. The other key point is that it is trained on amounts of data that are unfathomable to the human brain. This can be both scary and exciting at the same time, with more disruption coming our way.

Of course, apart from ChatGPT we have Gemini, Claude and many more equally valuable tools. The speed of progress is breathtaking, with something new coming every day. At the time of writing, Claude 3.5 Sonnet is Anthropic's latest model and is considered one of the strongest on standard vision benchmarks. It excels at visual reasoning tasks, such as interpreting charts and graphs, and accurately transcribes text from imperfect images.

Now that the context has been set, let me dive into one of the key concepts you need to understand in Generative AI: Large Language Models, also known as LLMs. Large Language Models are a class of AI models designed to understand and generate human language with a high degree of accuracy. In practice, LLMs are first trained on a large corpus of text data to learn general language patterns. This phase involves predicting the next word in a sentence, masked language modeling, or other self-supervised tasks. It is very much like the auto-complete feature that finishes sentences as we type. The key is to keep expanding on what is being asked and filling in the blanks, so to speak. Large Language Models represent a significant advancement in AI, offering powerful tools for understanding and generating human language.
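To make this concrete, here is a minimal sketch of next-token generation in Python, assuming the open-source Hugging Face transformers library and the small public GPT-2 model; the prompt text is just an illustration, and any modern causal language model behaves similarly.

```python
# A minimal sketch of next-token prediction, the self-supervised objective behind LLMs.
# Assumes the Hugging Face `transformers` library and the small public GPT-2 model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence is changing software development by"
completions = generator(prompt, max_new_tokens=20, num_return_sequences=1)

# The model keeps predicting the most likely next token given everything so far,
# which is how it "fills in the blanks" of the prompt.
print(completions[0]["generated_text"])
```

Larger models are trained with the same basic objective, just on vastly more data and with many more parameters.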

Prompt Engineering is another key concept to understand when it comes to Generative AI. Prompt engineering is the practice of designing and refining input prompts to elicit the desired output from AI models, particularly large language models (LLMs) like GPT-3 and GPT-4. The key success criterion is to make prompts clear and detailed. Prompts should also provide sufficient context about what you are trying to achieve. If you want even better results, provide examples along with the prompt.
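As an illustration, here is a sketch contrasting a vague prompt with a more carefully engineered one; the scenario and prompt wording are invented for the example, and no particular model or API is assumed.

```python
# A minimal sketch of prompt engineering: the same task phrased vaguely and clearly.
# The prompts are illustrative only; they are not tied to any specific model or API.

vague_prompt = "Summarize this."

engineered_prompt = """You are a release-notes assistant.
Summarize the bug report below in two sentences for a non-technical manager.

Example report: "Login fails with a 500 error when the password contains '@'."
Example summary: "Some users cannot log in because of a server error triggered by
special characters in passwords. A fix is being prioritised."

Bug report: "Checkout page times out when the cart has more than 50 items."
Summary:"""

# The engineered prompt states the role, the task, the audience, the length limit,
# and includes a worked example (few-shot prompting), which typically produces
# far more usable output than the vague version.
print(engineered_prompt)
```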

Two other key concepts to know in Generative AI are Supervised and Unsupervised Learning. Supervised learning involves training a model on a labeled dataset, where each input is paired with an output label. The model learns to map inputs to outputs based on this training data. Unsupervised learning involves training a model on a dataset without labeled outputs. The model tries to learn the underlying structure or distribution in the data.
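A minimal sketch, assuming the scikit-learn library and a tiny invented dataset, shows the difference in practice: the supervised model is given labels, while the unsupervised model has to find structure on its own.

```python
# A minimal sketch of supervised vs. unsupervised learning using scikit-learn.
# The toy dataset is invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = np.array([[1.0, 1.2], [0.9, 1.1], [3.8, 4.0], [4.1, 3.9]])

# Supervised: every input is paired with a label, and the model learns the mapping.
y = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(X, y)
print("Supervised prediction:", clf.predict([[4.0, 4.2]]))

# Unsupervised: no labels are provided; the model groups the data by similarity.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print("Unsupervised cluster assignments:", km.labels_)
```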

I also want to mention the key concepts of Machine Learning and Deep Learning. Machine learning is a branch of AI focused on building algorithms that enable computers to learn from and make predictions or decisions based on data. Deep learning is a subset of machine learning that uses neural networks with many layers (deep neural networks) to model complex patterns in large datasets.
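As a rough illustration, here is a minimal sketch of a deep neural network with several stacked layers, assuming PyTorch; the layer sizes are arbitrary and only meant to show what "many layers" looks like in code.

```python
# A minimal sketch of a "deep" neural network: several stacked layers in PyTorch.
# The layer sizes are arbitrary and chosen only to illustrate depth.
import torch.nn as nn

deep_net = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),   # hidden layer 1
    nn.Linear(64, 64), nn.ReLU(),   # hidden layer 2
    nn.Linear(64, 64), nn.ReLU(),   # hidden layer 3
    nn.Linear(64, 2),               # output layer
)

# A classical machine learning model might be a single formula or a shallow tree;
# stacking many layers like this is what makes the model "deep".
print(deep_net)
```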

The concepts mentioned above are the key to understanding Generative AI as you get started on this never-ending journey of learning. Let me also touch on what the impact of AI could be in a couple of areas: software development and software testing.

The impact on software development is all about enhancing productivity and efficiency. For example, AI-powered tools like Codex can automatically generate code snippets, which speeds up the coding process considerably. The other key is automated code reviews performed by AI to identify potential bugs and security vulnerabilities. These tools can also help developers refactor code for better performance and maintainability.
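As an illustration, the sketch below shows the kind of completion such a tool might produce from a plain-language comment; both the prompt and the generated function are hypothetical and not the output of any specific product.

```python
# An illustrative example of AI-assisted code generation.
# Prompt given to a hypothetical code-generation tool:
#   "Write a function that returns only the even numbers from a list."

def filter_even(numbers):
    """Return only the even numbers from the given list."""
    return [n for n in numbers if n % 2 == 0]

print(filter_even([1, 2, 3, 4, 5, 6]))  # [2, 4, 6]
```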

The impact on software testing is centred on test automation, which has always been one of the keys to DevOps processes. AI can create and execute test cases automatically, improving test coverage and identifying bugs more efficiently. Machine learning models can also be used for predictive bug detection. Again, this supports the Shift Left approach, so bugs are detected earlier in the life cycle. AI can also automate the build and deployment process, ensuring continuous integration and continuous deployment pipelines are efficient and error-free.
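As an illustration, here is a sketch of the kind of unit tests an AI assistant might generate for the small function from the previous example; the test cases themselves are hypothetical.

```python
# An illustrative sketch of AI-generated unit tests for the filter_even function above.
import unittest


def filter_even(numbers):
    return [n for n in numbers if n % 2 == 0]


class TestFilterEven(unittest.TestCase):
    def test_mixed_list(self):
        self.assertEqual(filter_even([1, 2, 3, 4]), [2, 4])

    def test_empty_list(self):
        self.assertEqual(filter_even([]), [])

    def test_negative_numbers(self):
        self.assertEqual(filter_even([-2, -3]), [-2])


if __name__ == "__main__":
    unittest.main()
```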

These are just some of the areas in the IT industry that are going to be practically disrupted by AI. Of course, none of this can replace humans, but it can free us to engage more in big-picture thinking and bring in new business ideas to improve products.

Finally, no discussion of AI can be complete without mentioning the ethical considerations. The key concern is that AI can introduce bias based on the data it is trained on. Models can also hallucinate, generating confident but incorrect information. We also need transparency in AI algorithms to build trust and reliability in AI development. Of course, any AI tool should ensure data security and privacy to prevent misuse of sensitive information.

AI is a complex field, but understanding the concepts can help us navigate this landscape much better as we look forward to an uncertain but better future for all of us. One of the books I liked on AI is AI Superpowers by Kai-Fu Lee, which also speaks about how far China has gone to become a leader in the AI race. It describes four waves of AI: Internet AI, Business AI, Perception AI and Autonomous AI. There is also a section on which jobs will be impacted by AI.

If you also want to know how humans can find meaning if robots take over, here is my article. The key question is: even if universal basic income is provided and we don't need to work for money, what will humans do to find meaning? I tried to answer this question in the article below. https://www.linkedin.com/pulse/what-humans-should-do-find-meaning-when-robots-take-shyam/

The way forward for us is to understand that the one thing that will never become obsolete is to keep learning, no matter the disruption. Richard Feynman's learning method is one of the best. It consists of four steps.

Study - This is very critical as you can't learn everything. Decide what you want to study.

Teach - This is an excellent step, as we need curiosity to learn, and we truly internalize a subject when we have to teach it to someone.

Identify gaps - The gap between what you know and what you need to know is the key to a great learning environment.

Simplify - Complexity has escalated and those who have the ability to simplify complex subjects have an edge.

Keep following the above, as we all need to learn big to achieve more in this era of deep disruption. The point is to master the key skills that will stand you in good stead going forward. In 21 Lessons for the 21st Century, Yuval Noah Harari says many pedagogical experts argue that schools should move to teaching the four C's: critical thinking, communication, collaboration and creativity. As you can see, thinking about how to attain mastery is the key moving forward.

Wishing you a great learning journey going forward. The views expressed here are my own and do not represent my organization.


