AI use cases: software

Key Points

  1. AI fits many software use cases today
  2. Code assist
  3. Testing
  4. Requirements automation
  5. RPA (robotic process automation)
  6. Deployments


References

Reference description with linked URLs | Notes






CEO Guide to Gen AI use cases 2024


Codegen IDE
















Codegen Tools




AlphaCodium-ai-generate-activity-test-2024-gpt4












Key Concepts


>>> Define AI use cases with examples for each AI type - use multiple industries


ceo-guide-to-genai-use-cases-2024.pdf file

Introduction - Leadership can't be automated

Section 1 - AI-enabled people: Chapter 1 Talent and skills; Chapter 2 Customer service; Chapter 3 Customer and employee experience

Section 2 - AI-powered data and technology: Chapter 4 Platforms, data, and governance; Chapter 5 Open innovation and ecosystems; Chapter 6 Application modernization; Chapter 7 Responsible AI and ethics; Chapter 8 Tech spend

Section 3 - AI-fueled operations: Chapter 9 Supply chain; Chapter 10 Marketing; Chapter 11 Cybersecurity; Chapter 12 Sustainability

Conclusion - Lay the groundwork for greatness


IBM > Generative AI won’t replace people, but people who use generative AI will replace people who don’t

IBM assumes people-centric rather than value-centric process models


GenAI is only one type of use case that AI and ML apply to.

See AI use cases for: optimization, prediction, pattern recognition, anomaly detection, responsible behavior (digital twins), content generation for audiences, codegen, testgen, datagen, trustgen, decisions, governance.


IBM > With generative AI, organizations can get the best of both worlds—automation + humanity.



Combining technologies to meet use cases: AI, DLT, Web3, Digital Twins, Quantum, Chaos, Automation, SXM (Smart X Management = Services, Trust, Ledger, Data, Quality, Governance, Analytics, Choices, Predictions)


WHO is your customer now?  Purchasing Mgr or Solution Designer?  Consumer or PCA ( Personal Consumer Assistant )


IBM focuses on customer service to people now with GenAI - part of today's needs



What does the customer want?  Challenge: the difference between Expectations, Value, and Consequences


How do different customer groups, use cases and scenarios learn?


IBM > Assumption > Value increases when technology meets design. Value explodes when generative AI meets experience.


Reality > Value increases when it's realized ( FACTUR3DT.io ) by setting and meeting value criteria and metrics for a group and use case scenario


What is the road to success?  VCRS > Value, Costs, Risks, Success Keys


IBM > success comes from reducing friction for consumers and workers




IBM > Assumption > ethical journeys build customer confidence.


Reality > different groups have different definitions of ethics 



IBM > Generative AI is disrupting the disruptors—and platform-based businesses have the edge < Agree on Time to Value for Platforms


Reality > ML models and platforms perform well when the data is automatically grounded, tested AND verifiable > SLT and Data Governance are Critical



Challenge: How to Mature Value Delivery capability for a community and set of use cases effectively?  


IBM > key > Ecosystem partnerships, where solution and service providers combine their skills and capabilities to deliver strategic outcomes


Reality > VCE ( Value Chain Economy ) succeeds with open engineering where all stakeholders benefit ( crew sport is the model )

 


Reality > successful VCE driven by open community strategy with tactical adjustments > align your resources to the community VCE model


IBM > model > With generative AI, technology drives innovation—and the business propels the technology.


Reality > Continuous business innovation for VCE use cases drives all value creation, innovation



Reality > key > how can we future proof VCE for easy, continuous improvement?


IBM > key > build foundational models that will give you such a network effect advantage


Reality > key > design VCE for network effect growing the community for faster success


IBM > assumption > Human values are at the heart of responsible AI.


Reality > key > design VCE to find acceptable value goals for the community: understand expectations > identify consequences > agree on a value set



IBM > spend smart on AI 

Reality > engage business stakeholders up front for Opportunity Assessments with ISR > use VCRS to measure value, costs, risks
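
A minimal sketch of how a VCRS-style opportunity assessment might be captured and ranked; the field names, weighting, and example opportunities are hypothetical and are not a defined ISR or FACTUR3DT formula:

  from dataclasses import dataclass

  @dataclass
  class VcrsAssessment:
      """Hypothetical VCRS record for one AI opportunity (Value, Costs, Risks, Success keys)."""
      opportunity: str
      value_score: float       # expected value, 0-10, agreed with business stakeholders
      cost_score: float        # relative cost, 0-10 (higher = more expensive)
      risk_score: float        # relative risk, 0-10 (higher = riskier)
      success_keys: list       # conditions that must hold for the value to be realized

      def net_score(self) -> float:
          # Illustrative ranking only: value minus the average of cost and risk
          return self.value_score - (self.cost_score + self.risk_score) / 2

  # Rank two candidate GenAI opportunities from an assessment workshop
  candidates = [
      VcrsAssessment("Contract review assistant", 8, 4, 5, ["grounded legal corpus", "human review step"]),
      VcrsAssessment("Marketing copy generator", 6, 2, 3, ["brand guidelines in prompt library"]),
  ]
  for c in sorted(candidates, key=VcrsAssessment.net_score, reverse=True):
      print(f"{c.opportunity}: net={c.net_score():.1f}, success keys={c.success_keys}")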


AI Operations

IBM > Supply chain automation just got an upgrade < Reality - agreed, but the other SXMs are key to VCE value


Reality > the tools are different but the VCE model and operations metrics are not  < KSG network effect 15 years before Amazon < demand forecasting trends



IBM > AI can improve marketing personalization, responsiveness and automation < Agreed



IBM > Generative AI amplifies risk— and resilience for security

Reality > Any Trust domain has risks.  AI can be both an offensive and defensive weapon to eliminate threats but needs verifiable data always - https://trust.mit.edu/



IBM > Generative AI can help scale sustainability— ushering in a new era of responsible growth


Reality > need better transparency and effective governance of the values and policies driving GenAI solutions in sustainability, as well as the processes and outcomes




IBM > SIMPLE THINKING EXAMPLE > What has got to happen over the next 30 years is all of the primary gas and petroleum has got to be removed from [the] mix. At the same time, you’ve got to massively ramp up electricity production. Right now, some of the big bottlenecks are areas where AI can help.


Reality > SMART THINKING EXAMPLE > prioritize both solutions and threats based on their value and impacts, short-term and long-term > simplistic "religious" beliefs on the environment are dangerous




Potential Value Opportunities



Trusted Enterprise AI & ML - Linkedin article.  - Mohan Venkataraman

Thought Leader, Speaker, Principal Consultant [Strategy, Supply Chain, Healthcare, AIML, Web3.0, Metaverse, Blockchain, IoT]


Trusted Enterprise AI & ML - Linkedin post

Enterprises are leveraging #generativeai and #machinelearning to gain insights, summarize vast amounts of information, generate reports (including sales forecasts, annual reports, and #ESG initiatives), negotiate contracts, review legal documents, make predictions, and classify items.

In many instances, they take actionable steps based on the recommendations and insights provided. In the realm of #AI and #ML, robust governance is essential to ensure responsible usage and maintain auditability, especially in cases of unexpected behavior or outcomes.

Users’ trust in AI models and the reliability of generated insights are paramount. #Blockchains and distributed ledger technologies (#DLTs) offer governance mechanisms that support responsible #AI usage and facilitate model lifecycle management.

Trusted Enterprise AI & ML article

The cognitive and text-generation capabilities of Chat GPT have sparked discussions about the social impact of AI, as well as concerns related to security, privacy, and copyright issues. Responsible and ethical adoption of this powerful technology is now a priority.

Enterprises are leveraging generative AI and machine learning to gain insights, summarize vast amounts of information, generate reports (including sales forecasts, annual reports, and ESG initiatives), negotiate contracts, review legal documents, make predictions, and classify items

AI potential impacts require strong governance

Actions based on AI recommendations carry significant implications—spanning life, legal matters, social dynamics, economics, and even politics. Therefore, the use of AI and ML models must be meticulously governed, controlled, monitored, and managed.

Distributed Ledger Technologies (DLTs) can establish a reliable AI service within enterprises.

Simple AI Stack Model

Enterprise data comprises information acquired from external partners, sourced from third-party data providers, and internally generated through various applications and content creation tools. These diverse data sources are carefully curated and serve as the foundation for training models.

At the base, we find pre-trained language models (LLMs) and vector databases—either purpose-built or derived from open-source or vendor-specific solutions. These foundational resources support higher-level enterprise-specific AI and ML models. Additionally, the stack includes agents, prompt libraries, and other reasoning objects.

The trust layer is provided by blockchain or distributed ledger technology (DLT). Blockchain’s immutability, distributed ledger capabilities, and support for smart contracts make it a valuable choice. However, other DLTs, such as QLDB or Fluree, can be equally effective.
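
As a rough illustration of what this trust layer could record, here is a minimal, hypothetical hash-chained audit log for model calls - a stand-in for a real blockchain or DLT such as QLDB or Fluree, whose actual APIs differ:

  import hashlib, json, time

  class AuditLedger:
      """Toy append-only, hash-chained log illustrating the DLT trust layer."""
      def __init__(self):
          self.entries = []

      def record(self, actor: str, model_id: str, prompt_hash: str, response_hash: str) -> dict:
          prev = self.entries[-1]["entry_hash"] if self.entries else "genesis"
          body = {
              "timestamp": time.time(),
              "actor": actor,              # user or agent invoking the model
              "model_id": model_id,        # which model/version produced the output
              "prompt_hash": prompt_hash,  # hash only; raw prompts stay in the enterprise data layer
              "response_hash": response_hash,
              "prev_hash": prev,
          }
          body["entry_hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
          self.entries.append(body)
          return body

  ledger = AuditLedger()
  ledger.record("analyst-42", "sales-forecast-llm-v3",
                hashlib.sha256(b"Q3 forecast prompt").hexdigest(),
                hashlib.sha256(b"model response").hexdigest())
  print(len(ledger.entries), "audit entries")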

Users, Model agents, Prompts

three key actors: Users, Models (including Agents), and Prompts (both dynamic and static).

Users are clients that invoke the AI service, interacting with it via prompts and finally receiving a response.

AI models are objects that include algorithms and parameters to support different types of use cases (recognition, categorization, generation, prediction, etc.).

AI agents provide interaction services between clients and models.

Prompts

  • Prompts serve as user or application-provided reasoning and queries.
  • They guide the system by framing questions or expressing requirements. Prompts can be either dynamic (generated on-the-fly) or static (predefined).
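
A minimal sketch of how these three actors could interact; the class names and the stand-in model are hypothetical, since the article does not prescribe an implementation:

  from typing import Callable

  class Prompt:
      """Static or dynamic reasoning/query sent by a user or application."""
      def __init__(self, template: str, dynamic: bool = False):
          self.template = template
          self.dynamic = dynamic
      def render(self, **kwargs) -> str:
          return self.template.format(**kwargs)

  class Agent:
      """Mediates between clients and a model, e.g. choosing prompts and post-processing."""
      def __init__(self, model: Callable[[str], str]):
          self.model = model  # any callable that maps prompt text to a response
      def ask(self, prompt: Prompt, **kwargs) -> str:
          return self.model(prompt.render(**kwargs))

  # A user invokes the AI service through the agent and receives a response.
  fake_model = lambda text: f"[model answer to: {text}]"
  agent = Agent(fake_model)
  print(agent.ask(Prompt("Classify this ticket: {ticket}", dynamic=True), ticket="login fails"))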



Some AI use cases implemented by SWT

  1. GIGO2 - Garbage In, Good data Out engine - used calibrations to generate quality data rules from bad data on a data farm
  2. ASA - Auto Support Agent - generated automatic responses for defect ticket resolutions: moved tickets to the defect queue, provided a how-to example doc, provided a known-fix link, or queued them to a manager (see the routing sketch after this list)
  3. QWF - Quick WebFacing Factory - generated advanced web pages based on data types, generation policies, and the vanilla WebFacing code source
  4. CQA - Code Quality Analyst - reviewed JEE code to generate an analysis report on the usage and quality of design patterns in an enterprise JEE insurance policy suite
  5. FVA - Fix Verification Agent - verified the target environment met the criteria to deploy a fix
  6. AMM - Automated Market Maker Agent - based on history and trading policy goals, the AMM agent recommended prices to sell vehicles for a given margin and turnaround - the user decided whether to use them
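
A hypothetical sketch of ASA-style routing, classifying a defect ticket into one of the resolution actions listed above; the categories, keywords, and knowledge-base link are illustrative only:

  from dataclasses import dataclass

  @dataclass
  class Ticket:
      ticket_id: str
      text: str

  KNOWN_FIXES = {"timeout on login": "https://kb.example.com/fixes/login-timeout"}  # hypothetical KB

  def route_ticket(ticket: Ticket) -> dict:
      text = ticket.text.lower()
      for symptom, link in KNOWN_FIXES.items():
          if symptom in text:
              return {"action": "send_known_fix", "link": link}
      if "how do i" in text or "how to" in text:
          return {"action": "send_howto_doc", "doc": "howto-examples.pdf"}
      if "crash" in text or "error" in text:
          return {"action": "move_to_defect_queue"}
      return {"action": "queue_to_manager"}  # fallback: human review

  print(route_ticket(Ticket("T-101", "Timeout on login after upgrade")))
  print(route_ticket(Ticket("T-102", "How do I export the report to CSV?")))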


MIT Study for Enterprise Gen AI Use Cases - 2024

ebook_mit-cio-generative-ai-report.pdf    link

ebook_mit-cio-generative-ai-report.pdf file


By contrast, the power of generative AI tools to democratize AI—to spread it through every function of the enterprise, to support every employee, and to engage every customer —heralds an inflection point where AI can grow from a technology employed for particular use cases to one that truly defines the modern enterprise.

technical leaders will have to act decisively: embracing generative AI to seize its opportunities and avoid ceding competitive ground, while also making strategic decisions about data infrastructure, model ownership, workforce structure, and AI governance that will have long-term consequences for organizational success.

<< no different than the advice / warning to move now on all new tech fads - blockchain, analytics, APIs, etc.

support for enterprise AI investments >> 

Generative AI and LLMs are democratizing access to artificial intelligence, finally sparking the beginnings of truly enterprise-wide AI. Powered by the potential of newly emerging use cases, AI is finally moving from pilot projects and “islands of excellence” to a generalized capability integrated into the fabric of organizational workflows. Technology teams no longer have to “sell” AI to business units; there is now significant “demand pull” from the enterprise

Gen AI will help extract value from unstructured data >>

generative AI’s new ability to surface and utilize once-hidden data will power extraordinary new advances across the organization

Gen AI quality and value are dependent on the right data, quality data, the right models, verified algorithms, the right tuning parameters, good calibration, good governance, good feedback >>

The generative AI era requires a data infrastructure that is flexible, scalable, and efficient - such as data lakehouses, can democratize access to data and analytics, enhance security, and combine low-cost storage with high-performance querying.

Custom LLMs, grounded custom data, custom algorithms & models, custom governance may create value and competitive advantage >>

Some organizations seek to leverage open-source technology to build their own LLMs, capitalizing on and protecting their own data and IP. CIOs are already cognizant of the limitations and risks of third-party services, including the release of sensitive intelligence and reliance on platforms they do not control or have visibility into. They also see opportunities around developing customized LLMs and realizing value from smaller models. The most successful organizations will strike the right strategic balance based on a careful calculation of risk, comparative advantage, and governance.
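
A minimal sketch of the open-source / smaller-model option using the Hugging Face transformers library; the model name is just an example, and prompt design, guardrails, and fine-tuning are omitted:

  # Requires: pip install transformers torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_name = "microsoft/phi-2"  # example small open model; substitute your licensed choice
  tokenizer = AutoTokenizer.from_pretrained(model_name)
  model = AutoModelForCausalLM.from_pretrained(model_name)

  prompt = "Summarize the key risks of relying on third-party AI services:"
  inputs = tokenizer(prompt, return_tensors="pt")
  outputs = model.generate(**inputs, max_new_tokens=80)
  print(tokenizer.decode(outputs[0], skip_special_tokens=True))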

AI clearly has risks to manage, will reduce demand for some jobs BUT leaders expect bigger gains in value from the benefits >> 

do not expect large-scale automation threats. Instead, they believe the broader workforce will be liberated from time-consuming work to focus on higher value areas of insight, strategy, and business value.

SGS will be key to delivering high value from AI while reducing risks >> 

Unified and consistent governance are the rails on which AI can speed forward. Generative AI brings commercial and societal risks, including protecting commercially sensitive IP, copyright infringement, unreliable or unexplainable results, and toxic content. To innovate quickly without breaking things or getting ahead of regulatory changes, diligent CIOs must address the unique governance challenges of generative AI, investing in technology, processes, and institutional structures.




2022 Adoption rates by industry

key>> providing broad access to Gen AI toolsets, quality discoverable data and automated governance - SGS >>

key>> data lakehouse provides the flexibility and scale of a data lake with the quality and governance services common in a data warehouse for trusted data sources >>

The lakehouse combines the best of both, offering an open architecture that combines the flexibility and scale of data lakes with the management and data quality of warehouses.
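
A minimal sketch of the lakehouse pattern, assuming a Spark environment with Delta Lake (a common open lakehouse table format) already configured; the paths, table, and column names are hypothetical:

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.appName("lakehouse-demo").getOrCreate()

  # Raw files land in low-cost storage but are managed as a transactional Delta table
  events = spark.read.json("/lakehouse/raw/customer_events/")
  events.write.format("delta").mode("overwrite").save("/lakehouse/silver/customer_events")

  # Analysts and ML pipelines query the same governed table with SQL
  silver = spark.read.format("delta").load("/lakehouse/silver/customer_events")
  silver.createOrReplaceTempView("customer_events")
  spark.sql("""
      SELECT customer_id, COUNT(*) AS interactions
      FROM customer_events
      GROUP BY customer_id
      ORDER BY interactions DESC
      LIMIT 10
  """).show()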

VA - process data at the source to reduce risks

For the VA, the lakehouse is appealing because it minimizes the need to move data, which creates privacy and governance risks. “We face a significant challenge in data movement, which is why we are dedicating substantial resources and expertise to address it,” says Schaefer.

Shell - enterprise integrated data layer << like PTP

The lakehouse abstracts complexity in a way that allows users to perform advanced activities regardless of technical competency. Shell has built an “enterprise layer” that allows users to interact dynamically. “Previously, you had to go to data stores, extract the data, cleanse it, and do multiple transform activities,” says O’Connell.

DuPont ( and airlines ) - predictive MTTF 


MIT survey shows the criticality of data for AI value in business


Potential for GenAI LLM bias vs. focused FLMs

Inaccurate and unreliable outputs are a further worry. The largest LLMs are, by dint of their size, tainted by false information online. That lends strength to the argument for more focused approaches, according to Matei Zaharia, cofounder and chief technology officer at Databricks.


Key AI questions

who owns the data?

who is in control of decisions when AI is used?  person?  AI agent ?  both ? shared ? delegated ?

How is the data provided to Genai for your use cases ?  ( via LLM  via grounded enterprise data or ? )

who owns the Gen AI engine ( model, algorithms )?

how is IP protected for producers and consumers by services using GenAI?

how is the output of GenAI calibrated? adjusted?

what are the rights to the Genai content?

how are data and content rights tracked ?  ( tokens ? VCs ?  consents ? )

how are the uses of Genai content tracked by first client ?  downstream ?

what is our data governance program effectiveness ?  efficiency ?

SGS & STEAR > are our current governance solutions adequate to meet the needs for all our data and AI use case governance scenarios - real-time verifications and analytics?

“We are seeing the need to have very integrated governance models, integrated governance structures for all data and all models.” Richard Spencer Schaefer, Chief Health Informatics Officer, Kansas City VA Medical Center

what controls do we need on use of 3rd party AI models, engines, services, content, data ?

how do we validate GenAI output meets our acceptance criteria ?

how is our data lakehouse strategy defined ?

what is the performance of our data lakehouse compared to our OKRs? KPIs?

how can our devops automation and quality be improved for AI and analytics use cases?

BZT replaces ZT << provider services offer automated self governance and health checks

How can trusts be improved for the data, the engines, the models, the algorithms, the parameters, the controls, the value ?

How can the value of the data, the engines, the models, the algorithms, the parameters, the controls be measured for specific use cases ?

SLT replaces blockchain << the blockchain model had to be rebuilt to open the chain to an integrated lakehouse, improve replay, provide better verifications, define managed and unmanaged KDEs for enterprise and user data models etc

BTOIP builds more trusts for use cases on top of the excellent TOIP trust and governance services architecture

How do we implement the 3B strategy for AI - borrow, buy, build ( or extend ) ?

Which has higher TTV ( time to value ) for operations and quality services ?  commercial or enterprise open-source software ?

Which has higher operations and quality risks ?  commercial or enterprise open-source software ?

Which has higher operations and quality costs ?  commercial or enterprise open-source software ?

Which has better open standards support ?  commercial or enterprise open-source software ?

Which software has better community support ?  commercial or enterprise open-source software ?

Which software has lower solution migration effort ?  commercial or enterprise open-source software ?

Which software has better security compliance ?  commercial or enterprise open-source software ?

Which software has better performance compliance ?  commercial or enterprise open-source software ?


How SGS provides STEAR governance capabilities for AI >>

Responsible AI and AI Governance#SGS-delivers-STEAR-governance-capabilties---Jim-Mason



AlphaCodium-ai-generate-activity-test-2024-gpt4


There's a new open-source, state-of-the-art code generation tool. It's a new approach that improves the performance of Large Language Models generating code.



The paper's authors call the process "AlphaCodium" and tested it on the CodeContests dataset, which contains around 10,000 competitive programming problems.



The results put AlphaCodium as the best approach to generate code we've seen. It beats DeepMind's AlphaCode and their new AlphaCode2 without needing to fine-tune a model!



I'm linking to the paper, the GitHub repository, and a blog post below, but let me give you a 10-second summary of how the process works:



Instead of using a single prompt to solve problems, AlphaCodium relies on an iterative process that repeatedly runs and fixes the generated code using the testing data.



1. The first step is to have the model reason about the problem. They describe it using bullet points and focus on the goal, inputs, outputs, rules, constraints, and any other relevant details.



2. Then, they make the model reason about the public tests and come up with an explanation of why the input leads to that particular output.



3. The model generates two to three potential solutions in text and ranks them in terms of correctness, simplicity, and robustness.



4. Then, it generates more diverse tests for the problem, covering cases not part of the original public tests.



5. Iteratively, pick a solution, generate the code, and run it on a few test cases. If the tests fail, improve the code and repeat the process until the code passes every test.
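
A highly simplified sketch of that generate-test-improve loop; the real AlphaCodium flow in the linked repo is much richer, and call_llm below is a placeholder for whatever model API you use:

  # Simplified sketch of the AlphaCodium-style iterative flow described above.
  def call_llm(prompt: str) -> str:
      raise NotImplementedError("replace with a real model call")

  def run_tests(code: str, tests: list) -> list:
      """Run generated code against (input, expected_output) pairs; return failure messages."""
      failures = []
      for test_input, expected in tests:
          namespace = {}
          try:
              exec(code, namespace)                 # generated code is expected to define solve(input)
              actual = namespace["solve"](test_input)
              if str(actual).strip() != expected.strip():
                  failures.append(f"input={test_input!r}: expected {expected!r}, got {actual!r}")
          except Exception as exc:
              failures.append(f"input={test_input!r}: raised {exc!r}")
      return failures

  def alphacodium_like(problem: str, public_tests: list, max_iters: int = 5) -> str:
      reflection = call_llm(f"Reason about this problem in bullet points:\n{problem}")              # step 1
      test_reasoning = call_llm(f"Explain why these tests produce their outputs:\n{public_tests}")  # step 2
      candidates = call_llm(f"Propose 2-3 ranked solution ideas:\n{reflection}\n{test_reasoning}")  # step 3
      extra_tests = public_tests  # step 4 would ask the model for additional, more diverse tests
      code = call_llm(f"Write a Python function solve(input) for the best idea:\n{candidates}")
      for _ in range(max_iters):                    # step 5: run the tests, fix on failure, repeat
          failures = run_tests(code, public_tests + extra_tests)
          if not failures:
              return code
          code = call_llm(f"Fix this code. Failures:\n{failures}\nCode:\n{code}")
      return code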



There's a lot more information in the paper and the blog post. Here are the links:



• Paper: https://lnkd.in/g9zkc_AK

• Blog: https://lnkd.in/g_wx88xj

• Code: https://lnkd.in/gJAxtgzn

https://github.com/Codium-ai/AlphaCodium



I attached an image comparing AlphaCodium with direct prompting using different models.



We tested AlphaCodium on a challenging code generation dataset called CodeContests, which includes competitive programming problems from platforms such as Codeforces. The proposed flow consistently and significantly improves results. On the validation set, for example, GPT-4 accuracy (pass@5) increased from 19% with a single well-designed direct prompt to 44% with the AlphaCodium flow.

Many of the principles and best practices we acquired in this work, we believe, are broadly applicable to general code generation tasks.







Azure AI Use Cases



AI apps employ machine learning to continually learn and adapt, using advanced models powered by cloud computing to optimize their results over time. The insights they provide are much more informative and actionable than their non-AI counterparts.

Compare Traditional to AI Apps


Learning and automation
  • Traditional apps: depend on the code written by the programmer to perform a specific task
  • Intelligent apps: programmed to learn to perform the task by using data, algorithms, computation, and method
  • Outcome: intelligent AI apps can adapt to changing situations and user preferences, while traditional apps are limited by predefined rules and logic

Responsiveness
  • Traditional apps: can only respond to user inputs or requests
  • Intelligent apps: can anticipate user needs and offer suggestions or solutions
  • Outcome: intelligent AI apps are proactive, making them more personalized and engaging than reactive traditional apps

Data capabilities
  • Traditional apps: designed only to handle certain types of data or inputs
  • Intelligent apps: designed to handle various types of data or inputs and even generate new data or output
  • Outcome: AI apps are flexible and creative, allowing users to engage beyond traditional app limitations in ways they didn’t expect

Implementation
  • Traditional apps: typically built on a monolithic architecture and deployed on-premises
  • Intelligent apps: built on the cloud using a microservices architecture
  • Outcome: AI apps have enhanced scalability that lets them handle unlimited traffic and data


Consulting Use Case

To maximize the collective knowledge of its consultants, Arthur D. Little created an internal solution that draws on text analytics and other AI enrichment capabilities in Azure AI services to improve indexing and deliver consolidated data insights. Using this solution, consultants have access to summaries of documents with the abstractive summarization feature in Azure AI Language. Unlike extractive summarization—which only extracts sentences with relevant information—abstractive summarization generates concise and coherent summaries, saving the consultants from scanning long documents for information.

1. Enhanced summarization capabilities speed up consultant workflows

2. Improved security and confidentiality

3. Rapid innovation for products and services
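
A minimal sketch of the abstractive summarization call described above, using the azure-ai-textanalytics Python SDK; the method name begin_abstract_summary assumes a recent SDK version (5.3.x or later), and the endpoint, key, and document text are placeholders:

  # Requires: pip install azure-ai-textanalytics>=5.3.0
  from azure.core.credentials import AzureKeyCredential
  from azure.ai.textanalytics import TextAnalyticsClient

  client = TextAnalyticsClient(
      endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",
      credential=AzureKeyCredential("<your-key>"),
  )

  documents = ["<long consulting report text to summarize>"]
  poller = client.begin_abstract_summary(documents)   # abstractive, not extractive
  for result in poller.result():
      if not result.is_error:
          for summary in result.summaries:
              print(summary.text)                     # concise, newly generated summary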

Synthesized Voice for Customer Service Use Case

TIM pioneers synthesized voice service to increase customer satisfaction


Azure AI Services

Azure provides a wide range of tools and services that support AI development:

  • Azure OpenAI Service
    Azure OpenAI Service provides access to powerful language models from OpenAI, such as GPT-4, GPT-3.5 Turbo, Codex, DALL-E, and Whisper, that perform tasks such as content generation, summarization, semantic search, and natural language to code translation. Enterprises use this service to improve digital customer experience by adding chatbot/generative AI capabilities to customer-facing solutions with Azure AI services and Azure OpenAI (a minimal chat sketch follows this list).

  • Azure AI Search
    Azure AI Search lets enterprises build rich search experiences over their private and heterogeneous data sources in web, mobile, and enterprise applications. Azure AI Search utilizes advanced deep-learning models to provide contextual and relevant results. It also supports features such as semantic search, knowledge mining, summary results, faceting, suggestions, synonyms, geo-search, and more.

  • Azure AI services
    Azure AI services is a suite of out-of-the-box and customizable AI tools, APIs, and models that help modernize business processes faster. Azure AI services include services for vision, speech, language, decision, metrics advisor, immersive reader, and more. Enterprises use these services to build intelligent applications that automate document processing, improve customer service, understand the root cause of anomalies, and extract insights from content.

  • Azure Kubernetes Service
    Azure Kubernetes Service simplifies deploying managed Kubernetes clusters in Azure by offloading the operational overhead to Azure. Kubernetes is a popular open-source platform for orchestrating containers that run applications. Enterprises use AKS to run their containerized applications at scale with high availability and performance.

  • Azure Cosmos DB
    Azure Cosmos DB is a globally distributed, multi-model database service that offers single-digit millisecond response times, automatic and instant scalability, and guaranteed speed at any scale. Azure Cosmos DB supports multiple data models including document, key-value, graph, and column-family data. It also supports multiple APIs, such as native NoSQL, MongoDB API, PostgreSQL API, Apache Cassandra API, and more. Enterprises use Azure Cosmos DB to store and query their data in the most suitable model and API for their application needs
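
A minimal sketch of the chatbot pattern mentioned in the Azure OpenAI Service item above, using the AzureOpenAI client from the openai Python package; the endpoint, key, API version, and deployment name are placeholders:

  # Requires: pip install openai>=1.0
  from openai import AzureOpenAI

  client = AzureOpenAI(
      azure_endpoint="https://<your-resource>.openai.azure.com/",
      api_key="<your-key>",
      api_version="2024-02-01",
  )

  response = client.chat.completions.create(
      model="<your-gpt-deployment-name>",   # the Azure deployment name, not the raw model name
      messages=[
          {"role": "system", "content": "You are a customer-support assistant for our product."},
          {"role": "user", "content": "How do I reset my password?"},
      ],
  )
  print(response.choices[0].message.content)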



gartner.com-Gartner Reprint.pdf - AI enterprise trends 2024

Key Findings

  • Generative AI (GenAI) makes people more powerful personally and professionally.
  • Businesses will improve at overcoming their worst traits.
  • New threats create new responsibilities and communities.

Recommendations

To build and expand a digital business, executive leaders in end-user organizations must:
  • Use GenAI tools to improve the overall skill set of the workforce.
  • Position GenAI as a force multiplier in solving both new and perennial problems.
  • Meet unconventional threats by creating new roles to mitigate risk.
Analysis

Savvy executive leaders must broaden the horizons of IT professionals and business teams alike. They will stress the need to experiment with GenAI to learn its possibilities. They will embrace the risks of using GenAI so they can reap its rewards.

GenAI is both opportunity and risk

GenAI breaks that mold. The popularity of ChatGPT has spurred many to action well past technology innovation. The existence of large language models (LLMs) covers a broad range of creative capabilities that keep building more excitement. But opposite that excitement is healthy skepticism and concerns about risk. GenAI can produce hallucinations or create suboptimal responses, it is little understood, and it creates both legal and ethical dilemmas.

Predictions



Potential Challenges



Candidate Solutions



Large US IT companies or companies with IT departments



List of Top 10 IT Companies in the USA by Revenue and Market Cap






Step-by-step guide for Example



sample code block




Recommended Next Steps