Causal reasoning is pivotal to developing Artificial General Intelligence (AGI); however, current Large Language Models (LLMs) predominantly capture statistical correlations rather than a genuine understanding of cause-and-effect relationships. Judea Pearl’s influential framework, the “Ladder of Causation,” distinguishes three levels: association (seeing), intervention (doing), and counterfactual reasoning (imagining). While LLMs excel at associational reasoning thanks to their extensive training data, building more capable and trustworthy LLM systems requires climbing beyond the correlation rung.
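
To make the gap between the first two rungs concrete, here is a minimal, self-contained Python sketch (a toy simulation, not anything an LLM does internally): a hidden confounder makes X and Y strongly correlated, so the observational quantity P(Y=1 | X=1) overstates the true causal effect measured by the interventional P(Y=1 | do(X=1)).

```python
import random

random.seed(0)

def sample(do_x=None):
    """Draw one (x, y) pair from a toy confounded system.

    Z is a hidden confounder that raises the chance of both X and Y,
    so X and Y are correlated even though X only weakly causes Y.
    """
    z = random.random() < 0.5                        # hidden confounder
    if do_x is None:
        x = random.random() < (0.8 if z else 0.2)    # Z drives X (observation)
    else:
        x = do_x                                     # intervention: set X, ignore Z
    p_y = 0.1 + 0.6 * z + 0.1 * x                    # Z dominates Y; X adds a little
    y = random.random() < p_y
    return x, y

N = 100_000

# Rung 1 (seeing): observational conditional P(Y=1 | X=1)
obs = [sample() for _ in range(N)]
p_y_given_x1 = sum(y for x, y in obs if x) / sum(x for x, _ in obs)

# Rung 2 (doing): interventional P(Y=1 | do(X=1))
intv = [sample(do_x=True) for _ in range(N)]
p_y_do_x1 = sum(y for _, y in intv) / N

print(f"P(Y=1 | X=1)     ~ {p_y_given_x1:.3f}")   # inflated by the confounder (~0.68)
print(f"P(Y=1 | do(X=1)) ~ {p_y_do_x1:.3f}")      # actual causal effect (~0.50)
```

A purely associational learner sees only the first number; answering the second requires a causal model of how the data were generated.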

LLMs can be effectively used alongside existing causal methods as a proxy for human domain knowledge, reducing the human effort required to set up causal analyses, which is a significant barrier to widespread adoption. Importantly, this does not imply the spontaneous emergence of complex causal reasoning in LLMs. A promising direction for enterprise LLM applications is the synergy between structured knowledge graphs, vector search over textual corpora, and contextual LLMs: knowledge graphs provide explainable structured knowledge, vector search enables fast retrieval of relevant content, and LLMs contribute contextual natural language generation.
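
As an illustration of that synergy, the sketch below wires the three components together in miniature. Everything here is a hypothetical stand-in: the triples, the hand-made “embeddings,” and `call_llm` are placeholders, not any real graph database, encoder, or model API.

```python
import math

# Toy structured knowledge graph: (subject, relation, object) triples.
KG = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "may_cause", "stomach irritation"),
    ("ibuprofen", "treats", "headache"),
]

# Toy corpus with hand-made vectors standing in for a real text encoder.
CORPUS = {
    "Aspirin is a common over-the-counter analgesic.": [0.9, 0.1, 0.0],
    "Ibuprofen reduces inflammation and pain.":        [0.2, 0.8, 0.1],
    "Knowledge graphs encode entities and relations.": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def kg_lookup(entity):
    """Explainable structured facts mentioning the entity."""
    return [t for t in KG if entity in (t[0], t[2])]

def vector_search(query_vec, k=1):
    """Relevant-content retrieval (here: brute-force cosine similarity)."""
    ranked = sorted(CORPUS, key=lambda doc: cosine(CORPUS[doc], query_vec), reverse=True)
    return ranked[:k]

def call_llm(prompt):
    """Placeholder for a real LLM API call; echoes the grounded prompt."""
    return f"[LLM answer grounded in]:\n{prompt}"

# Compose: structured facts + retrieved passages -> contextual generation.
entity, query_vec = "aspirin", [1.0, 0.0, 0.0]
prompt = (f"Facts: {kg_lookup(entity)}\n"
          f"Passages: {vector_search(query_vec)}\n"
          f"Question: What does {entity} treat?")
print(call_llm(prompt))
```

The division of labor is the point of the design: the graph keeps the answer auditable, the vector index keeps retrieval fast, and the LLM only phrases what the other two components supplied.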

Generative AI, including LLMs, must comply with transparency requirements on two fronts: toward regulatory agencies (governmental and state bodies, as well as domain regulators such as those in finance) and within enterprise generative-AI applications deployed in a business context.

The original proposal from the European Commission for the AI Act did not include references to generative AI and foundation models.
When initially proposed in April 2021 to establish harmonized EU-wide rules for AI, the draft law may have seemed adequate for the state of the art at the time, but it did not anticipate OpenAI’s release of ChatGPT.
The Council of the EU approved a last-minute change on December 6, 2022, adding a title on general-purpose AI systems to its version of the AI Act. Even so, the European Union’s draft AI Act already requires revision to account for the opportunities and harms of generative AI.

[Figure: a robotic hand drawing a causal loop diagram on a transparent digital surface, symbolizing causality in artificial intelligence.]

For now, the European Parliament has proposed that providers of LLM foundation models perform essential due diligence on their offerings. This includes three key requirements: risk identification, testing for appropriate performance levels, and documentation of the training process (including reinforcement learning from human feedback, RLHF) together with intelligible usage instructions. However, the proposed AI Act does not align well with LLMs and foundation models, as it is structured around the idea that each AI application can be assigned to a risk category based on its intended use.

For enterprise generative-AI applications in a business setting, it is imperative to ensure the secure usage of tools such as ChatGPT, protecting sensitive information and preventing data leakage.
This involves limiting access to the LLM, restricting certain input types, and screening responses for biased, discriminatory, or hateful language, as sketched below.
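
Here is a minimal sketch of such guardrails. The user list, regexes, blocklist, and `call_llm` stub are all illustrative assumptions; a real deployment would rely on dedicated access control, data-loss-prevention tooling, and a content-moderation service rather than these placeholders.

```python
import re

# Illustrative patterns for sensitive inputs (placeholders, not production DLP).
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # SSN-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),     # email address
    re.compile(r"(?i)\b(api[_-]?key|password)\b"),  # credential hints
]

# Placeholder blocklist standing in for a real toxicity/bias classifier.
BLOCKED_TERMS = {"blocked_term_1", "blocked_term_2"}

ALLOWED_USERS = {"alice", "bob"}  # limit access to the LLM

def call_llm(prompt: str) -> str:
    """Stub for the actual model call (e.g., an API request)."""
    return f"model response to: {prompt!r}"

def guarded_chat(user: str, prompt: str) -> str:
    # 1. Limit access to the LLM.
    if user not in ALLOWED_USERS:
        return "access denied"
    # 2. Restrict input types: block prompts containing sensitive data.
    if any(p.search(prompt) for p in SENSITIVE_PATTERNS):
        return "prompt rejected: possible sensitive data"
    # 3. Screen responses for disallowed language before returning them.
    response = call_llm(prompt)
    if any(term in response.lower() for term in BLOCKED_TERMS):
        return "response withheld: failed content check"
    return response

print(guarded_chat("alice", "Summarize our Q3 roadmap."))   # passes all checks
print(guarded_chat("alice", "My password is hunter2"))      # rejected at input
print(guarded_chat("mallory", "Hello"))                     # access denied
```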

The AI landscape is evolving rapidly, and both regulation and enterprise practice will need to keep pace.