Large Language Model: ChatGPT

  • Currently, significant attention is being given to ChatGPT and other similar “giant artificial intelligences” (gAIs), such as Bard, Chinchilla, PaLM, and LaMDA.
  • ChatGPT is an example of a large language model (LLM), a type of transformer-based neural network trained to predict the next word in a sequence of words.
  • Ensuring safe data: OpenAI has taken measures to ensure that the data used for training is safe and suitable for the purpose.
  • Trillion parameters: OpenAI takes advantage of GPT-4’s large size and trillion parameters to pursue its goal of making “artificial general intelligence that benefits all of humanity.”
  • Multiple uses: These systems are intended for many use cases, including legal services, teaching students, generating policy suggestions, and even providing scientific insights.
  • Novel thoughts: The objective behind gAIs is to automate knowledge work, a realm traditionally considered beyond the scope of automation.
Understanding the Term “Large Language Model”:
  • A large language model (LLM) is a type of artificial intelligence (AI) algorithm that uses deep learning techniques and massively large data sets to understand, summarize, generate and predict new content.
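The “predict the next word” objective described above can be illustrated with a toy bigram model. This is only a sketch: real LLMs use transformer networks over subword tokens and billions of parameters, and the corpus and function names below are invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it in the corpus."""
    next_counts = defaultdict(Counter)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        next_counts[cur][nxt] += 1
    return next_counts

def predict_next(model, word):
    """Return the continuation seen most often in training, if any."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the model predicts the next word and the next word follows"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # prints "next" ("next" follows "the" twice, "model" once)
```

An LLM does the same thing at vastly greater scale: it estimates which continuation is most probable given everything it has seen, which is why its fluency does not imply understanding.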
ChatGPT-4:
  • It is the latest iteration of the GPT (Generative Pre-trained Transformer) series.
  • Building on the success of its predecessors, ChatGPT-4 takes natural language processing to new heights with its ability to generate coherent and contextually relevant responses.
  • It has been trained on enormous amounts of data, enabling it to understand and generate human-like text with exceptional fluency and coherence.
What is High Modernism?
  • The current driving philosophy of states is high modernism, a faith in order and measurable progress. It has the following features:
  • Top-Down Approach: gAIs leave no room for democratic input since they are designed in a top-down manner with the premise that the model will acquire the smaller details on its own.
  • Disregarding complex human behaviour: States seek to improve the lives of their citizens, but when they design policies from the top down, they often reduce the richness and complexity of human experience to that which is quantifiable.
  • Neglect of local knowledge: This ideology often ignores local knowledge and lived experience, leading to disastrous consequences.
How can top-down planning lead to negative impacts?
  • The case of monocrop plantations, in contrast to multi-crop plantations, shows how top-down planning can fail to account for regional diversity in agriculture.
  • The consequence of that failure is the long-term destruction of soil and livelihoods.
  • This is the same risk now facing knowledge-work in the face of gAIs.
Why is high modernism a problem when designing AI?
  • Standardisation over sustainability: Such a business model tends to prioritize standardization over sustainability or craftsmanship, resulting in a homogenized market where everyone has access to cheap, mass-produced products.
  • Destruction of local shops: This often leads to the gradual demise of local small-town shops, as they struggle to compete against the convenience and widespread availability offered by online platforms.
What do giant AIs abstract away?
  • Threat to language diversity: Models trained only on the languages that already populate the Internet, with English predominant (~60%), are biased toward those languages, putting under-represented languages at risk of loss.
  • Inherent biases of gAIs: A model is likely to be biased in other ways as well, including on religion, sex, and race.
  • Myopic responses lacking multi-dimensionality: LLMs are unreasonably effective at providing intelligible responses, but those responses present a myopic view. An atlas, for example, is a great way of seeing the whole world in snapshots, yet it lacks the multi-dimensionality required to capture intricate local detail. gAIs abstract this kind of knowledge away in favour of an atlas view of all that is present on the Internet.
Case study showing the lack of multi-dimensionality in gAIs
  • When ChatGPT was asked about the drawbacks of planting eucalyptus trees in the West Medinipur district, the model presented various reasons why monoculture plantations are unfavourable.
  • However, it failed to address the true reason behind the local opposition, i.e., the reduction in available food resources caused by monoculture plantations.
  • That kind of local knowledge only comes from experience.
  • The territory can only be captured by the people doing the tasks that gAIs are trying to replace.
Can diversity help?
  • A part of the failure to capture the territory is demonstrated in gAIs’ lack of understanding.
  • If one is cautious in their inquiries, these systems can generate remarkable responses.
  • However, posing the same question with slight variations can result in illogical answers.
  • This pattern has led computer scientists to refer to these systems as “stochastic parrots,” implying that they can imitate language but exhibit random behavior.
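The “stochastic” in “stochastic parrots” refers to sampling: rather than always emitting the single most likely next word, a model draws from a probability distribution, so identical prompts can yield different continuations. A minimal sketch, where the distribution is entirely made up for illustration:

```python
import random

# Hypothetical next-word probabilities for a prompt such as
# "eucalyptus plantations are" (invented numbers, for illustration only)
next_word_probs = {"profitable": 0.5, "harmful": 0.3, "trees": 0.2}

def sample_next_word(probs, rng):
    """Draw one word at random, weighted by its probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed for reproducibility
samples = [sample_next_word(next_word_probs, rng) for _ in range(5)]
print(samples)  # repeated sampling can produce different words for the same prompt
```

This weighted sampling is why the same question can be answered well one moment and illogically the next: the model imitates the statistics of language without a stable underlying understanding.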
Ways to reduce the risks posed by gAIs:
  • Promoting democratic inputs: Artificially slowing down the rate of progress in AI commercialisation to allow time for democratic inputs.
  • Development of diverse models: Ensure there are diverse models being developed. ‘Diversity’ here implies multiple solutions to the same question, like independent cartographers preparing different atlases with different incentives: some will focus on the flora while others on the fauna.
  • Adequate time frame before the final outcome: Research on diversity suggests that the more time passes before reaching a common solution, the better the outcome. A better outcome is critical given the stakes involved in artificial general intelligence, a field in which a third of researchers believe it could lead to a nuclear-level catastrophe.
Promising Research Directions:  
  • BLOOM, an open-source large language model (LLM), has been developed by scientists using public funding, with thorough filtering of the training data.
  • This model is also capable of handling multiple languages, including 10 Indian languages, and is supported by an active ethics team that regularly updates the license for its use.
Way Forward:
  • Consideration of ethical implications: While ChatGPT-4 offers tremendous potential, it is essential to consider the ethical implications and challenges associated with its deployment.
  • Ensuring transparency and accountability: As an AI language model, it reflects the biases and limitations in the data it was trained on. It is crucial to mitigate any biases and ensure transparency and accountability in its usage.
  • Responsible use: Striking a balance between innovation and responsible deployment is paramount to maximize the benefits while minimizing potential risks.
  • Human-centric approach: While ChatGPT-4 can simulate human-like interactions, it is essential to recognize its limitations and ensure that human oversight and judgement are integrated into its applications.
  • OpenAI has taken steps toward addressing these concerns by emphasizing AI’s safety, robustness, and responsible use.
  • Ongoing research and collaboration with experts in various fields are essential in refining and improving models like ChatGPT-4, making them more aligned with human values and societal needs.
News Source: The Hindu