
Potential Value Opportunities


Some AI use cases implemented by SWT

  1. GIGO2 - Garbage in Good data out engine - uses calibrations to generate quality data rules from bad data on a data farm
  2. ASA - Auto Support Agent - generated automatic responses for defect ticket resolutions - moved the ticket to the defect queue, provided a how-to example doc, provided a known fix link, queued to the manager
  3. QWF - Quick WebFacing Factory - generated advanced web pages based on data types and generation policies and the vanilla WebFacing code source
  4. CQA - Code Quality Analyst - reviewed JEE code to generate an analysis report on the usage and quality of design patterns in an enterprise JEE insurance policy suite
  5. FVA - Fix Verification Agent - verified the target environment met the criteria to deploy a fix
  6. AMM - Automated Market Maker Agent - based on history and trading policy goals, the AMM agent recommended prices to sell vehicles for a given margin and turnaround - the user decided whether to use the recommendation ( see the sketch after this list )
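
A minimal sketch of the kind of recommendation the AMM agent above produced: price a vehicle to hit a target margin, then trade margin for turnaround as the unit ages on the lot, leaving the final decision to the user. The field names, thresholds, and discount policy are hypothetical illustrations, not the actual SWT implementation.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    cost: float          # acquisition + reconditioning cost
    days_on_lot: int     # how long the unit has been in inventory

def recommend_price(vehicle: Vehicle,
                    target_margin: float = 0.12,
                    max_days: int = 45,
                    discount_per_day: float = 0.002) -> float:
    """Recommend a sale price that meets the target margin, trimming the
    margin as the vehicle exceeds the turnaround limit (hypothetical policy)."""
    base_price = vehicle.cost * (1.0 + target_margin)
    # Trade margin for turnaround: discount units that are aging past the limit.
    overdue_days = max(0, vehicle.days_on_lot - max_days)
    discounted = base_price * (1.0 - discount_per_day * overdue_days)
    # Never recommend selling below cost; the user decides whether to accept.
    return round(max(discounted, vehicle.cost), 2)

if __name__ == "__main__":
    print(recommend_price(Vehicle(cost=18_500.00, days_on_lot=60)))
```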


MIT Study for Enterprise Gen AI Use Cases - 2024

ebook_mit-cio-generative-ai-report.pdf ( attached report )


By contrast, the power of generative AI tools to democratize AI—to spread it through every function of the enterprise, to support every employee, and to engage every customer —heralds an inflection point where AI can grow from a technology employed for particular use cases to one that truly defines the modern enterprise.

technical leaders will have to act decisively: embracing generative AI to seize its opportunities and avoid ceding competitive ground, while also making strategic decisions about data infrastructure, model ownership, workforce structure, and AI governance that will have long-term consequences for organizational success.

<< no different from the advice / warning to move now on all new tech fads - blockchain, analytics, APIs, etc.

support for enterprise AI investments >> 

Generative AI and LLMs are democratizing access to artificial intelligence, finally sparking the beginnings of truly enterprise-wide AI. Powered by the potential of newly emerging use cases, AI is finally moving from pilot projects and “islands of excellence” to a generalized capability integrated into the fabric of organizational workflows. Technology teams no longer have to “sell” AI to business units; there is now significant “demand pull” from the enterprise

Gen AI will help extract value from unstructured data >>

generative AI’s new ability to surface and utilize once-hidden data will power extraordinary new advances across the organization

Gen AI quality and value are dependent on the right data, quality data, the right models, verified algorithms, the right tuning parameters, good calibration, good governance, and good feedback >>

The generative AI era requires a data infrastructure that is flexible, scalable, and efficient. Architectures such as data lakehouses can democratize access to data and analytics, enhance security, and combine low-cost storage with high-performance querying.

Custom LLMs, grounded custom data, custom algorithms & models, custom governance may create value and competitive advantage >>

Some organizations seek to leverage open-source technology to build their own LLMs, capitalizing on and protecting their own data and IP. CIOs are already cognizant of the limitations and risks of third-party services, including the release of sensitive intelligence and reliance on platforms they do not control or have visibility into. They also see opportunities around developing customized LLMs and realizing value from smaller models. The most successful organizations will strike the right strategic balance based on a careful calculation of risk, comparative advantage, and governance.

AI clearly has risks to manage and will reduce demand for some jobs, BUT leaders expect bigger gains in value from the benefits >>

Leaders do not expect large-scale automation threats. Instead, they believe the broader workforce will be liberated from time-consuming work to focus on higher value areas of insight, strategy, and business value.

SGS will be key to delivering high value from AI while reducing risks >> 

Unified and consistent governance are the rails on which AI can speed forward. Generative AI brings commercial and societal risks, including protecting commercially sensitive IP, copyright infringement, unreliable or unexplainable results, and toxic content. To innovate quickly without breaking things or getting ahead of regulatory changes, diligent CIOs must address the unique governance challenges of generative AI, investing in technology, processes, and institutional structures.

How SGS provides STEAR governance capabilities for AI >>

Responsible AI and AI Governance#SGS-delivers-STEAR-governance-capabilties---Jim-Mason




2022 Adoption rates by industry


key>> providing broad access to Gen AI toolsets, quality discoverable data and automated governance - SGS >>

key>> the data lakehouse provides the flexibility and scale of a data lake with the quality and governance services common in a data warehouse for trusted data sources >>

The lakehouse combines the best of both, offering an open architecture that combines the flexibility and scale of data lakes with the management and data quality of warehouses.
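
A minimal sketch of that combination, assuming pandas with pyarrow for Parquet I/O: low-cost columnar files play the role of the data lake, and a simple quality gate stands in for the warehouse-style management applied before data is promoted to a trusted zone. The column names and rules are illustrative, not any specific lakehouse product's API.

```python
import pandas as pd

def quality_gate(df: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical rules applied before raw data is promoted to the curated zone."""
    required = ["claim_id", "policy_id", "amount"]
    missing = [c for c in required if c not in df.columns]
    if missing:
        raise ValueError(f"schema check failed, missing columns: {missing}")
    clean = df.dropna(subset=required)      # completeness rule
    clean = clean[clean["amount"] >= 0]     # basic integrity rule
    return clean

raw = pd.DataFrame({
    "claim_id": [1, 2, 3],
    "policy_id": ["P-10", None, "P-12"],
    "amount": [1200.0, 300.0, -50.0],
})

curated = quality_gate(raw)
# Cheap columnar storage; a real lakehouse adds ACID tables, catalogs,
# and access controls on top of files like this.
curated.to_parquet("claims_curated.parquet", index=False)
print(pd.read_parquet("claims_curated.parquet"))
```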

VA - process data at the source to reduce risks

For the VA, the lakehouse is appealing because it minimizes the need to move data, which creates privacy and governance risks. “We face a significant challenge in data movement, which is why we are dedicating substantial resources and expertise to address it,” says Schaefer.

Shell - enterprise integrated data layer << like PTP

The lakehouse abstracts complexity in a way that allows users to perform advanced activities regardless of technical competency. Shell has built an “enterprise layer” that allows users to interact dynamically. “Previously, you had to go to data stores, extract the data, cleanse it, and do multiple transform activities,” says O’Connell.

DuPont ( and airlines ) - predictive MTTF ( mean time to failure )


MIT survey shows the criticality of data for AI value in business



Potential for Gen AI LLM bias vs focused FLMs

Inaccurate and unreliable outputs are a further worry. The largest LLMs are, by dint of their size, tainted by false information online. That lends strength to the argument for more focused approaches, according to Matei Zaharia, cofounder and chief technology officer at Databricks.


Key AI questions

who owns the data?

who is in control of decisions when AI is used?  person?  AI agent ?  both ? shared ? delegated ?

How is the data provided to Gen AI for your use cases ?  ( via the LLM alone, via grounded enterprise data, or ? )  - see the grounding sketch after these questions

who owns the Gen AI engine ( model, algorithms )?

how is IP protected for producers and consumers by services using Gen AI ?

how is the output of Gen AI calibrated ?  adjusted ?

what are the rights to the Gen AI content ?

how are data and content rights tracked ?  ( tokens ? VCs ?  consents ? )

how are the uses of Gen AI content tracked by the first client ?  downstream ?

what is our data governance program effectiveness ?  efficiency ?

SGS & STEAR > are our current governance solutions adequate to meet the needs for all our data and AI use case governance scenarios - real-time verifications and analytics?

“We are seeing the need to have very integrated governance models, integrated governance structures for all data and all models.” Richard Spencer Schaefer, Chief Health Informatics Officer, Kansas City VA Medical Center

what controls do we need on use of 3rd party AI models, engines, services, content, data ?

how do we validate that Gen AI output meets our acceptance criteria ?  ( see the sketch after these questions )

how is our data lakehouse strategy defined ?

what is the performance of our data lakehouse compared to our OKRs? KPIs?

how can our devops automation and quality be improved for AI and analytics use cases?

BZT replaces ZT << provider services offer automated self-governance and health checks

How can trusts be improved for the data, the engines, the models, the algorithms, the parameters, the controls, the value ?

How can the value of the data, the engines, the models, the algorithms, the parameters, the controls be measured for specific use cases ?

SLT replaces blockchain << the blockchain model had to be rebuilt to open the chain to an integrated lakehouse, improve replay, provide better verifications, define managed and unmanaged KDEs for enterprise and user data models, etc.

BTOIP builds more trusts for use cases on top of the excellent TOIP trust and governance services architecture

How do we implement the 3B strategy for AI - borrow, buy, build ( or extend ) ?

Which has higher TTV ( time to value ) for operations and quality services ?  commercial or enterprise open-source software ?

Which has higher operations and quality risks ?  commercial or enterprise open-source software ?

Which has higher operations and quality costs ?  commercial or enterprise open-source software ?

Which has better open standards support ?  commercial or enterprise open-source software ?

Which software has better community support ?  commercial or enterprise open-source software ?

Which software has lower solution migration effort ?  commercial or enterprise open-source software ?

Which software has better security compliance ?  commercial or enterprise open-source software ?

Which software has better performance compliance ?  commercial or enterprise open-source software ?
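
Picking up the grounding and acceptance-criteria questions above, a minimal sketch of the pattern: supply enterprise facts to the model rather than relying only on what the LLM memorized, then validate the output against explicit criteria before it is used. call_llm is a hypothetical stand-in for whatever model endpoint is in scope, and the criteria shown are illustrative only.

```python
from typing import Callable

def build_grounded_prompt(question: str, facts: list[str]) -> str:
    """Ground the model with enterprise data instead of the LLM's built-in knowledge."""
    context = "\n".join(f"- {f}" for f in facts)
    return (
        "Answer using only the facts below. If the facts are insufficient, say so.\n"
        f"Facts:\n{context}\n\nQuestion: {question}"
    )

def meets_acceptance_criteria(answer: str, facts: list[str]) -> bool:
    """Illustrative acceptance criteria: non-empty, bounded length,
    and at least one grounded fact is actually referenced."""
    if not answer or len(answer) > 2000:
        return False
    return any(fact.split()[0].lower() in answer.lower() for fact in facts)

def answer_with_governance(question: str, facts: list[str],
                           call_llm: Callable[[str], str]) -> str:
    prompt = build_grounded_prompt(question, facts)
    answer = call_llm(prompt)
    if not meets_acceptance_criteria(answer, facts):
        return "REJECTED: output failed acceptance criteria; route to human review."
    return answer

# Example with a fake model so the sketch runs without any external service.
fake_llm = lambda prompt: "Policy P-10 covers collision damage up to $25,000."
facts = ["Policy P-10 covers collision damage up to $25,000.",
         "Policy P-10 excludes commercial use."]
print(answer_with_governance("What does policy P-10 cover?", facts, fake_llm))
```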




AlphaCodium-ai-generate-activity-test-2024-gpt4

...


AI apps employ machine learning to continually learn and adapt, using advanced models powered by cloud computing to optimize their results over time. The insights they provide are much more informative and actionable than their non-AI counterparts.

Compare Traditional to AI Apps



Learning and automation
  • Traditional apps: depend on the code written by the programmer to perform a specific task
  • Intelligent apps: programmed to learn to perform the task by using data, algorithms, computation, and methods
  • Outcome: intelligent AI apps can adapt to changing situations and user preferences, while traditional apps are limited by predefined rules and logic

Responsiveness
  • Traditional apps: can only respond to user inputs or requests
  • Intelligent apps: can anticipate user needs and offer suggestions or solutions
  • Outcome: intelligent AI apps are proactive, making them more personalized and engaging than reactive traditional apps

Data Capabilities
  • Traditional apps: designed only to handle certain types of data or inputs
  • Intelligent apps: designed to handle various types of data or inputs and even generate new data or output
  • Outcome: AI apps are flexible and creative, allowing users to engage beyond traditional app limitations in ways they didn’t expect

Implementation
  • Traditional apps: typically built on a monolithic architecture and deployed on-premises
  • Intelligent apps: built on the cloud using a microservices architecture
  • Outcome: AI apps have enhanced scalability that lets them handle unlimited traffic and data
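
To make the learning and automation row concrete, a small sketch contrasting the two styles, assuming scikit-learn: a traditional rule fixed by the programmer versus the same decision learned from historical data, so it can be retrained as conditions change. The ticket-escalation example is illustrative only.

```python
from sklearn.linear_model import LogisticRegression

# Traditional app: behavior is fixed by the rule the programmer wrote.
def is_priority_ticket_rule(num_errors: int, is_vip_customer: bool) -> bool:
    return num_errors > 5 or is_vip_customer

# Intelligent app: the same decision is learned from historical outcomes,
# so it can adapt when new labeled data arrives.
X = [[1, 0], [8, 0], [2, 1], [0, 0], [12, 1], [3, 0]]   # [num_errors, is_vip]
y = [0, 1, 1, 0, 1, 0]                                   # 1 = ticket was escalated
model = LogisticRegression().fit(X, y)

print(is_priority_ticket_rule(8, False))   # True, but only because 5 was hard-coded
print(model.predict([[8, 0]]))             # learned decision, retrainable on new data
```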



Consulting Use Case

To maximize the collective knowledge of its consultants, Arthur D. Little created an internal solution that draws on text analytics and other AI enrichment capabilities in Azure AI services to improve indexing and deliver consolidated data insights. Using this solution, consultants have access to summaries of documents with the abstractive summarization feature in Azure AI Language. Unlike extractive summarization—which only extracts sentences with relevant information—abstractive summarization generates concise and coherent summaries, saving the consultants from scanning long documents for information.

1. Enhanced summarization capabilities speed up consultant workflows

2. Improved security and confidentiality

3. Rapid innovation for products and services
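
A hedged sketch of what the summarization step in the consulting use case above could look like with the Azure AI Language Python SDK ( azure-ai-textanalytics 5.3+ exposes an abstractive summarization operation ). The endpoint, key, and document text are placeholders; the source does not describe Arthur D. Little's exact configuration.

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

# Placeholder endpoint and key for an Azure AI Language resource.
client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-api-key>"),
)

documents = [
    "Long consulting engagement report text goes here ..."
]

# Abstractive summarization generates new, coherent sentences rather than
# extracting existing ones from the document.
poller = client.begin_abstract_summary(documents)
for result in poller.result():
    if result.is_error:
        print(f"Error: {result.error.message}")
        continue
    for summary in result.summaries:
        print(summary.text)
```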


Synthesized Voice for Customer Service Use Case

TIM pioneers synthesized voice service to increase customer satisfaction
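
The source does not describe TIM's implementation; as one possible building block, a minimal text-to-speech sketch with the Azure Speech SDK ( azure-cognitiveservices-speech package ). The key, region, voice name, and prompt text are placeholder assumptions.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials for an Azure Speech resource.
speech_config = speechsdk.SpeechConfig(subscription="<your-speech-key>",
                                       region="<your-region>")
# Assumed neural voice; any supported voice name can be configured here.
speech_config.speech_synthesis_voice_name = "it-IT-ElsaNeural"

# Default constructor plays the synthesized audio to the default speaker.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async(
    "Gentile cliente, il suo appuntamento è confermato per domani alle dieci."
).get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesized customer-service prompt successfully.")
else:
    print(f"Synthesis did not complete: {result.reason}")
```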




Azure AI Services

Azure provides a wide range of tools and services that support AI development:

...