Table of Contents
Key Points
...
Potential Value Opportunities
Trusted Enterprise AI & ML - Linkedin article. - Mohan Venkataraman
Trusted Enterprise AI & ML - Linkedin post
Enterprises are leveraging #generativeai and #machinelearning to gain insights, summarize vast amounts of information, generate reports (including sales forecasts, annual reports, and #ESG initiatives), negotiate contracts, review legal documents, make predictions, and classify items.
In many instances, they take actionable steps based on the recommendations and insights provided. In the realm of #AI and #ML, robust governance is essential to ensure responsible usage and maintain auditability, especially in cases of unexpected behavior or outcomes.
Users’ trust in AI models and the reliability of generated insights are paramount. #Blockchains and distributed ledger technologies (#DLTs) offer governance mechanisms that support responsible #AI usage and facilitate model lifecycle management.
Trusted Enterprise AI & ML article
The cognitive and text-generation capabilities of ChatGPT have sparked discussions about the social impact of AI, as well as concerns related to security, privacy, and copyright issues. Responsible and ethical adoption of this powerful technology is now a priority.
Enterprises are leveraging generative AI and machine learning to gain insights, summarize vast amounts of information, generate reports (including sales forecasts, annual reports, and ESG initiatives), negotiate contracts, review legal documents, make predictions, and classify items.
AI potential impacts require strong governance
Actions based on AI recommendations carry significant implications—spanning life, legal matters, social dynamics, economics, and even politics. Therefore, the use of AI and ML models must be meticulously governed, controlled, monitored, and managed.
Distributed Ledger Technologies (DLTs) can establish a reliable AI service within enterprises.
Simple AI Stack Model
Enterprise data comprises information acquired from external partners, sourced from third-party data providers, and internally generated through various applications and content creation tools. These diverse data sources are carefully curated and serve as the foundation for training models.
At the base, we find pre-trained large language models (LLMs) and vector databases—either purpose-built or derived from open-source or vendor-specific solutions. These foundational resources support higher-level enterprise-specific AI and ML models. Additionally, the stack includes agents, prompt libraries, and other reasoning objects.
Underpinning the stack is the trust layer, provided by blockchain or distributed ledger technology (DLT). Blockchain’s immutability, distributed ledger capabilities, and support for smart contracts make it a valuable choice. However, other DLTs, such as QLDB or Fluree, can be equally effective.
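As a rough sketch of what the trust layer buys you, the toy Python example below (not tied to any specific ledger product; all names are illustrative) records model-lifecycle events in a hash-chained log. Each entry embeds the hash of its predecessor, so later tampering with any recorded event is detectable on verification, which is the auditability property a blockchain or DLT provides at enterprise scale.

```python
import hashlib
import json


def record_entry(log, payload):
    """Append a payload to a hash-chained audit log.

    Each entry stores the hash of the previous entry, so altering any
    earlier entry breaks the chain. This is a simplified stand-in for
    the immutability a blockchain/DLT trust layer provides.
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(payload, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    log.append({"prev_hash": prev_hash, "payload": payload, "hash": entry_hash})
    return entry_hash


def verify_chain(log):
    """Re-derive every hash; return False if any entry was altered."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps(entry["payload"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True


log = []
record_entry(log, {"event": "model_registered", "model": "sales-forecast-v1"})
record_entry(log, {"event": "model_promoted", "model": "sales-forecast-v1"})
print(verify_chain(log))   # True: chain intact

log[0]["payload"]["model"] = "tampered"  # simulate tampering with history
print(verify_chain(log))   # False: tampering detected
```

A real deployment would replace the in-memory list with a ledger service and record richer metadata (model hash, training-data lineage, approver identity), but the verification idea is the same.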
Users, Model agents, Prompts
The service involves three key actors: Users, Models (including Agents), and Prompts (both dynamic and static).
Users are clients that invoke the AI service, interacting with it via prompts and finally receiving a response.
AI models are objects comprising algorithms and parameters that support different types of use cases (recognition, categorization, generation, prediction, etc.).
AI agents provide interaction services between clients and models.
Prompts
- Prompts serve as user- or application-provided reasoning and queries.
- They guide the system by framing questions or expressing requirements. Prompts can be either dynamic (generated on the fly) or static (predefined).
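The three actors above can be sketched as simple data structures. This hypothetical Python model (the class and field names are assumptions for illustration, not from the source) shows the flow: a user submits a prompt, an agent mediates, and a model produces the response. The agent is the natural point to hook governance, such as logging each interaction to the trust layer.

```python
from dataclasses import dataclass


@dataclass
class Prompt:
    text: str
    dynamic: bool  # True if generated on the fly, False if predefined


@dataclass
class Model:
    name: str
    task: str  # e.g. "recognition", "generation", "prediction"

    def run(self, prompt: Prompt) -> str:
        # Placeholder inference; a real model would invoke an LLM or
        # an enterprise-specific model here.
        return f"[{self.name}/{self.task}] response to: {prompt.text}"


@dataclass
class Agent:
    """Provides the interaction service between a user and a model."""
    model: Model

    def handle(self, user: str, prompt: Prompt) -> str:
        # A governance hook could record (user, prompt, response)
        # on the DLT trust layer here for auditability.
        return self.model.run(prompt)


agent = Agent(Model(name="forecaster", task="prediction"))
print(agent.handle("alice", Prompt("Q3 sales outlook?", dynamic=True)))
```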
Some AI use cases implemented by SWT
...
ai-apps-use-cases-msft-azure-ebook-2024.pdf. link
AI apps employ machine learning to continually learn and adapt, using advanced models powered by cloud computing to optimize their results over time. The insights they provide are much more informative and actionable than their non-AI counterparts.
Compare Traditional to AI Apps
| | Traditional Apps | Intelligent Apps | Outcomes |
|---|---|---|---|
| Learning and automation | Depends on the code written by the programmer to perform a specific task | Programmed to learn the task using data, algorithms, computation, and methods | Intelligent AI apps can adapt to changing situations and user preferences, while traditional apps are limited by predefined rules and logic |
| Responsiveness | Can only respond to user inputs or requests | Can anticipate user needs and offer suggestions or solutions | Intelligent AI apps are proactive, making them more personalized and engaging than reactive traditional apps |
| Data capabilities | Designed to handle only certain types of data or inputs | Designed to handle various types of data or inputs, and can even generate new data or output | AI apps are flexible and creative, letting users engage beyond traditional app limitations in unexpected ways |
| Implementation | Typically built on a monolithic architecture and deployed on-premises | Built on the cloud using a microservices architecture | AI apps scale elastically in the cloud to handle growing traffic and data |
Consulting Use Case
To maximize the collective knowledge of its consultants, Arthur D. Little created an internal solution that draws on text analytics and other AI enrichment capabilities in Azure AI services to improve indexing and deliver consolidated data insights. Using this solution, consultants have access to summaries of documents with the abstractive summarization feature in Azure AI Language. Unlike extractive summarization—which only extracts sentences with relevant information—abstractive summarization generates concise and coherent summaries, saving the consultants from scanning long documents for information.
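To make the extractive-versus-abstractive distinction concrete, here is a toy Python sketch of the extractive side only (it scores sentences by word frequency, a deliberately simple heuristic, and is not how Azure AI Language works internally). Its defining limitation is visible in the code: every output sentence is copied verbatim from the input, whereas an abstractive summarizer generates new, rephrased sentences.

```python
import re
from collections import Counter


def extractive_summary(text, n_sentences=1):
    """Toy extractive summarizer: score each sentence by the document-wide
    frequency of its words, then keep the top-scoring sentences.

    Unlike abstractive summarization, the output is always a verbatim
    subset of the input sentences -- nothing is rephrased.
    """
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"\w+", text.lower()))
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"\w+", s.lower())),
        reverse=True,
    )
    top = set(scored[:n_sentences])
    # Preserve the original order of the selected sentences.
    return " ".join(s for s in sentences if s in top)


text = ("AI models need governance. Governance of AI models builds trust in AI. "
        "Cats sleep.")
print(extractive_summary(text, n_sentences=1))
```

Because the summary is lifted verbatim, it can read as disjointed on long documents, which is why the abstractive feature described above saves the consultants from scanning the source text themselves.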
1. Enhanced summarization capabilities speed up consultant workflows
2. Improved security and confidentiality
3. Rapid innovation for products and services
Synthesized Voice for Customer Service Use Case
TIM pioneers synthesized voice service to increase customer satisfaction
Azure AI Services
Azure provides a wide range of tools and services that support AI development:
...