
AI Tools & Dev

Key Points


References

Reference description with linked URLs | Notes















Key Concepts



Open-source Coding AI Agents (via LinkedIn)

Open-source alternatives have become the go-to choice even for businesses building their products.
Pair them with a competitive LLM provider such as Claude or GPT and you can use this tech largely for free.

📌 Command Line Tools



1. Aider
- AI pair programming in your terminal (CLI).
- Noted for being more economical in token usage than Cline.
- Not beginner-friendly.

https://lnkd.in/gubh-Xr3

2. Codename Goose by Block
- Full codebase migrations between frameworks/languages.
- Automated performance benchmarking.
- Uses the Model Context Protocol (MCP) to integrate natively with your enterprise codebases.

https://lnkd.in/gKj-dzBX


📌 Full Stack Applications



1. Bolt.dev (previously oTToDev)
- Create an entire app from a single prompt.
- Lets you choose between different models.
- Even supports OpenRouter for free API access to a few models.

https://lnkd.in/g_MDh8Wt

2. Srcbook

- Similar workflow to Bolt.dev.
- Based on TypeScript.
- Supports major LLM providers.
- Two versions available:
a- Open-source local download and setup
b- Their hosted app with pricing options
- ❌ The application works best with non-open-source LLMs like Claude 3.5 Sonnet and GPT-4o.

https://lnkd.in/gitfkQ2m

3. Zed

- A fully fledged AI code editor.
- A multiplayer code editor from the creators of Atom and Tree-sitter.
- Strong support across many languages.
- ❌ Not available for Windows at the moment

https://lnkd.in/gfJvrZin

📌 VSCode Extensions



1. Cline (Previously Claude Dev)
- Very similar workflow to Cursor.
- It was built with Anthropic's MCP (Model Context Protocol) feature.
- Cline can launch a site in a headless browser, click, type, scroll, and capture screenshots + console logs.

https://lnkd.in/g9ZYvAY9

2. Roo-Cline (Personal Favourite)
- A fork of Cline with faster response times and more experimental features.
- Utilizes coding and thinking concepts from Aider.

https://lnkd.in/gAFtgyjB

3. Continue
- It has a workflow similar to that of early versions of GitHub Copilot.
- Best if you want simple auto-code completion and suggestions.

https://lnkd.in/gN-ykqNJ

4. Llama Coder
- Works on your codebases locally.
- Best for simple code completions without depending on external LLM APIs.

Check it out here: https://lnkd.in/gBiirEur

Note: this is not an exhaustive list. These are the tools I've found to have the best workflows, though yours may differ.


You don't need expensive LLMs to make your AI Agents better
- Here's what I mean: https://www.linkedin.com/posts/rakeshgohel01_you-can-build-better-reasoning-ai-agents-activity-7292912659585282050-uwzI?utm_source=share&utm_medium=member_desktop

Check out a comparison of Paid Coding Agents
- https://www.linkedin.com/posts/rakeshgohel01_coding-agents-were-the-most-used-ai-agent-activity-7292187675921367040-CDBH?utm_source=share&utm_medium=member_desktop


QuantaLogic is a ReAct (Reasoning & Action) framework for building advanced AI agents.

It seamlessly integrates large language models (LLMs) with a robust tool system, enabling agents to understand, reason about, and execute complex tasks through natural language interaction.

The CLI version includes coding capabilities comparable to Aider.

https://github.com/quantalogic/quantalogic




Python tools


scikit-learn


PyTorch




Spring AI

https://spring.io/projects/spring-ai 

https://docs.spring.io/spring-ai/reference/


Spring AI is a project within the Spring ecosystem that helps Java developers incorporate artificial intelligence (AI) into their applications. It is an application framework that applies Spring's design principles, such as modularity and portability, to the AI domain. Spring AI provides a Spring-friendly API and abstractions for developing AI applications, and it supports many AI model providers, including:
OpenAI, Azure OpenAI, Amazon Bedrock, Hugging Face, Google VertexAI, Mistral AI, Stability AI, Ollama, PostgresML, and Transformers (ONNX).
Spring AI simplifies integrating AI functionality into Spring applications by streamlining interactions with AI models. It uses familiar Spring concepts and patterns, which helps with learning and adoption. For example, prompt templates can be used to populate placeholders within a model object; the resulting string becomes the prompt sent to the AI model.
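
The prompt-template mechanism described above is the familiar placeholder-filling pattern. Spring AI itself is Java; the following pure-Python sketch only illustrates the idea, using the standard library's `string.Template`:

```python
# Sketch of the placeholder-filling pattern behind prompt templates.
# Spring AI's PromptTemplate does the same thing in Java; this is illustrative only.
from string import Template

template = Template("Tell me a $adjective joke about $topic.")
prompt = template.substitute(adjective="funny", topic="cows")
print(prompt)
```

The filled-in string is then sent to the model as the prompt.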


Spring AI is an application framework for AI engineering. Its goal is to apply Spring ecosystem design principles, such as portability and modular design, to the AI domain, and to promote POJOs as the building blocks of AI applications.

Features

Portable API support across AI providers for Chat, text-to-image, and Embedding models. Both synchronous and stream API options are supported. Dropping down to access model-specific features is also supported.

Chat Models

  • Amazon Bedrock
    • Anthropic
    • Cohere's Command
    • AI21 Labs' Jurassic-2
    • Meta's Llama
    • Amazon's Titan
  • Anthropic Claude
  • Azure Open AI
  • Google Vertex AI
    • PaLM2
    • Gemini
  • Groq
  • HuggingFace - access thousands of models, including those from Meta such as Llama
  • MistralAI
  • MiniMax
  • Moonshot AI
  • Ollama - run AI models on your local machine
  • OpenAI
  • QianFan
  • ZhiPu AI
  • Watsonx.AI

Text-to-image Models

  • OpenAI with DALL-E
  • StabilityAI

Transcription (audio to text) Models

  • OpenAI

Embedding Models

  • Azure OpenAI
  • Amazon Bedrock
    • Cohere
    • Titan
  • Mistral AI
  • MiniMax
  • Ollama
  • (ONNX) Transformers
  • OpenAI
  • PostgresML
  • QianFan
  • VertexAI
    • Text
    • Multimodal
    • PaLM2
  • ZhiPu AI

The Vector Store API provides portability across different providers, featuring a novel SQL-like metadata filtering API that maintains portability.
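
The pattern behind that metadata-filtering API is: apply an SQL-like predicate over document metadata first, then rank the survivors by vector similarity. A rough pure-Python sketch of the two steps (the store layout and helper names are hypothetical, not Spring AI's actual API):

```python
import math

# Toy in-memory "vector store": each document carries text, metadata, and an embedding.
docs = [
    {"text": "Spring AI intro", "meta": {"year": 2024}, "vec": [1.0, 0.0]},
    {"text": "Old notes",       "meta": {"year": 2020}, "vec": [0.9, 0.1]},
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(query_vec, where, top_k=1):
    # Step 1: metadata filter (the "SQL-like" predicate).
    candidates = [d for d in docs if where(d["meta"])]
    # Step 2: rank remaining documents by vector similarity.
    return sorted(candidates, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)[:top_k]

hits = search([1.0, 0.0], where=lambda m: m["year"] >= 2024)
```

Because the filter is expressed as a portable predicate rather than a provider-specific query, the same search can run against any backing vector database.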

Vector Databases

  • Azure AI Service
  • Apache Cassandra
  • Chroma
  • Elasticsearch
  • GemFire
  • Milvus
  • MongoDB Atlas
  • Neo4j
  • OpenSearch
  • Oracle
  • PGvector
  • Pinecone
  • Qdrant
  • Redis
  • SAP Hana
  • Typesense
  • Weaviate

Spring Boot Auto Configuration and Starters for AI Models and Vector Stores.

Function calling

You can declare java.util.Function implementations to OpenAI models for use in their prompt responses. You can provide these functions directly as objects or refer to them by name if registered as a @Bean within the application context. This feature minimizes unnecessary code and enables the AI model to ask for more information to fulfill its response.
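
The shape of function calling is a registry of named tools that the framework dispatches when the model asks for one. A pure-Python sketch of that pattern (this mirrors the idea of @Bean-registered java.util.Function implementations; it is not Spring AI's actual API, and the tool here is a stub):

```python
# Hypothetical tool registry: callables the model can request by name.
TOOLS = {}

def tool(fn):
    """Register a callable under its name, akin to declaring a function @Bean."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def current_weather(city: str) -> str:
    # Stub tool; a real implementation would call a weather service.
    return f"Sunny in {city}"

def handle_tool_call(name: str, args: dict) -> str:
    # The model replies with a tool name plus arguments; the framework
    # dispatches the call and feeds the result back into the conversation.
    return TOOLS[name](**args)

result = handle_tool_call("current_weather", {"city": "Paris"})
```

The model never executes code itself; it only names a registered function and supplies arguments, which keeps the application in control of side effects.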

Models supported are

  • OpenAI
  • Azure OpenAI
  • VertexAI
  • Mistral AI
  • Anthropic Claude
  • Groq

ETL framework for Data Engineering

  • The core functionality of our ETL framework is to facilitate the transfer of documents to model providers using a Vector Store. The ETL framework is based on Java functional programming concepts, helping you chain together multiple steps.
  • We support reading documents in various formats, including PDF, JSON, and more.
  • The framework allows for data manipulation to suit your needs. This often involves splitting documents to adhere to context window limitations and enhancing them with keywords for improved document retrieval effectiveness.
  • Finally, processed documents are stored in the Vector Database, making them accessible for future retrieval.
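
The ETL steps above (read, split to fit context windows, enrich with keywords, store) chain together as plain functions. A minimal pure-Python sketch of that chain; all function names and the naive splitting/keyword logic are illustrative, not the framework's API:

```python
def read_document(text):
    # "Extract": wrap raw text with an empty metadata map.
    return {"text": text, "meta": {}}

def split(doc, max_len=20):
    # "Transform" step 1: split into chunks that fit a context window.
    t = doc["text"]
    return [{"text": t[i:i + max_len], "meta": dict(doc["meta"])}
            for i in range(0, len(t), max_len)]

def enrich(chunk):
    # "Transform" step 2: tag each chunk with crude keywords to aid retrieval.
    chunk["meta"]["keywords"] = [w for w in chunk["text"].split() if len(w) > 4]
    return chunk

store = []  # "Load": stand-in for the vector database
for chunk in split(read_document("Spring AI provides an ETL framework for document pipelines")):
    store.append(enrich(chunk))
```

Each stage takes the previous stage's output, which is the functional-chaining style the framework is built around.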

Extensive reference documentation, sample applications, and workshop/course material.

Future releases will build upon this foundation to provide access to additional AI Models, for example, the Gemini multi-modal model just released by Google, a framework for evaluating the effectiveness of your AI application, more convenience APIs, and features to help solve the “query/summarize my documents” use cases. Check GitHub for details on upcoming releases.

Getting Started

You can get started in a few simple steps:

  1. Install the Spring CLI, and then, in your shell, run:
spring boot new --from ai --name myai

This command creates a new application for basic interactions with ChatGPT; follow the instructions in the generated README file to get your API key.

  2. Run the application:
./mvnw spring-boot:run

  3. Curl the endpoint:
curl localhost:8080/ai/simple

Want to get started in another way? View the Getting Started section in the reference documentation.


Hugging Face


huggingface-notes1


The platform where the machine learning community collaborates on models, datasets, and applications.

Hugging Face provides access to state-of-the-art AI models, focusing on NLP and transformers. The platform includes comprehensive, open-source libraries that simplify tasks like model training, data processing, and tokenization.

Hugging Face is an open-source platform that serves as a central hub for accessing, building, and sharing pre-trained machine learning models, with a primary focus on Natural Language Processing (NLP). It lets developers and researchers easily incorporate state-of-the-art AI capabilities into their applications by providing a readily accessible collection of models and datasets. This simplifies building AI and AML solutions for tasks such as text generation, translation, and sentiment analysis, all within a collaborative environment where users can contribute and share their own models and research findings.

Key features of Hugging Face:
  • Extensive Model Library:
    Offers a vast repository of pre-trained models from leading researchers, covering a wide range of NLP tasks, which can be readily fine-tuned for specific use cases. 
  • Transformers Library:
    A core component providing efficient implementation of transformer architectures, enabling high-performance NLP tasks with ease. 
  • Community-Driven Collaboration:
    Fosters a collaborative environment where users can share their models, datasets, and research findings, accelerating AI development. 
  • User-Friendly Interface:
    Provides intuitive tools and APIs for accessing and utilizing models, making it accessible to a wide range of users, from experienced researchers to beginner developers. 
How Hugging Face is used in AI and AML solutions:
  • Rapid Prototyping:
    Quickly experiment with different pre-trained models to identify the best fit for a specific NLP task. 
  • Fine-tuning Models:
    Adapt existing pre-trained models to specific datasets and requirements for customized performance. 
  • Model Deployment:
    Easily integrate pre-trained models into applications, allowing for fast development and deployment of AI-powered features. 
  • Research Acceleration:
    Access cutting-edge AI models and research findings to stay updated with the latest advancements in the field. 
Overall, Hugging Face significantly simplifies the process of building AI and AML solutions by providing a comprehensive platform for accessing, utilizing, and contributing to a vast collection of high-quality pre-trained models, making advanced NLP capabilities readily available to a wider community of developers and researchers.

Getting-started-with-generative-ai-using-hugging-face-platform-on-aws/

Hugging Face Platform provides no-code and low-code solutions to train, deploy, and publish state-of-the-art generative AI models for production workloads on managed infrastructure. In 2023, the Hugging Face Platform became available on AWS Marketplace, allowing AWS customers to subscribe directly and connect their AWS account with their Hugging Face account.

This allows customers to pay for Hugging Face usage directly with their AWS account, a new integrated billing method that makes it easy to manage payment for all managed services used by all members of your organization. Hugging Face has provided step-by-step guidance for customers on how to subscribe and connect their Hugging Face account with their AWS account.

In this post, we will delve into the premium features and managed services provided by the Hugging Face Platform, elucidating the value they bring to customers.

Hugging Face is an AWS Specialization Partner with Competencies in Generative AI and Machine Learning. Its mission is to democratize machine learning through open source, open science, and Hugging Face products and services.


TIP - Fine-Tuning, Deploying, and Training Machine Learning Models With Hugging Face

Beyond training and deployment, users can also leverage Hugging Face to create interactive model demos, access research resources, develop business applications, and evaluate ML models.

1. Chatbot Applications: Hugging Face provides a suite of chatbot applications, including Chatty, Talking Dog, Talking Egg, and Boloss. These chatbots are designed to engage and entertain you through AI-powered conversations. Additionally, they showcase the company’s NLP capabilities.

2. HuggingChat: HuggingChat is an open-source chatbot with an impressive 30 billion parameters, based on the OpenAssistant project's LLaMA-derived model. HuggingChat is designed to be lightweight and efficient, making it suitable for running on consumer hardware. What sets it apart is its strong commitment to data privacy, ensuring that messages are stored solely for user display and not used for research or identification purposes.

3. Expert Acceleration Program: This program will connect you with ML experts who provide dedicated support throughout the development and implementation of ML models. From research to production, these experts provide assistance, answer questions, and will help you find solutions to specific ML challenges. The program also promotes collaborative learning and guidance.

4. Private Hub: Similar to the public hub, the private hub will enable you to collaborate, experiment, train, and develop ML models. However, it provides a private group setting, ideal for businesses or teams that want to work on ML projects within a secure and restricted environment.

5. Inference Endpoints: Hugging Face’s Inference Endpoints service streamlines the deployment of models. You can deploy various models, including transformers and diffusers, on dedicated and managed infrastructure. This service provides production-ready APIs, eliminating the complexities of infrastructure management and MLOps. It operates on a pay-as-you-go structure and provides secure offline endpoints via a direct connection to your Virtual Private Cloud (VPC).

6. AutoTrain: AutoTrain is an automated solution for training, evaluating, and deploying ML models, even for those without extensive coding expertise. Leveraging it, you can define your tasks and upload the necessary data. Furthermore, AutoTrain takes care of model selection and training. This feature also simplifies and accelerates the ML model development process.

7. StarCoder: In collaboration with ServiceNow, Hugging Face developed StarCoder, an open-source language model tailored for code generation. StarCoder is trained on over eighty programming languages and can generate code from natural-language descriptions. It also acts as an alternative to other AI code-generating systems.

8. Models: Hugging Face hosts a vast library of ML models, boasting over 300,000 models as of the latest data. These models cover various domains and use cases, making them a valuable resource for researchers and developers looking for pre-trained models to accelerate their projects.

9. Data Sets: Effective ML model training relies on high-quality data sets. Hugging Face provides access to a range of community-uploaded data sets. These data sets cover diverse topics, including books, Wikipedia data, human preferences related to AI outputs, movie reviews from IMDb, and more.

10. Spaces: Hugging Face Spaces is a user-friendly environment that simplifies the showcasing of ML models. It provides computing resources that are necessary for hosting model demonstrations. Spaces is designed to be accessible even to users without extensive technical knowledge, making it easy to share and present ML models to a wider audience.

Collaborative and Open-Source Approach

1. Accessibility: Hugging Face democratizes AI development by lowering entry barriers. It provides pre-trained models, fine-tuning scripts, and deployment APIs, reducing the need for extensive computing resources and specialized skills. This accessibility accelerates the creation of Language Model (LLM) applications.

2. Integration: Hugging Face facilitates seamless integration with various Machine Learning (ML) frameworks. For instance, its Transformer library smoothly integrates with popular frameworks like PyTorch and TensorFlow, ensuring flexibility and compatibility in model development.

3. Rapid Prototyping: Hugging Face empowers developers with rapid prototyping and deployment capabilities for NLP and ML applications. This agility enables swift experimentation and testing of new AI-driven functionalities.

4. Community Support: Hugging Face boasts a vibrant community of AI enthusiasts and professionals. This collaborative ecosystem continually updates models, provides extensive documentation, and includes tutorials. Additionally, access to this community fosters knowledge-sharing and problem-solving.

5. Cost-Effectiveness: Building large ML models from scratch can be financially daunting. Hugging Face provides a cost-effective alternative. Leveraging its hosted models and resources, businesses can save significant costs while benefiting from AI-driven solutions that meet their specific needs.

Fine-Tuning, Deploying, and Training Machine Learning Models With Hugging Face

1. Enhancing Web Interactions: Hugging Face plays a pivotal role in web development by enabling the integration of AI-powered chatbots and NLP models into websites. This enhances user interactions and provides real-time responses, improving the overall user experience.

2. Chatbot Integration: Hugging Face facilitates the seamless integration of chatbot applications into websites. These chatbots can analyze user queries, understand context, and respond intelligently, making websites more engaging and interactive.

3. Content Curation: Hugging Face can assist in automating content curation on websites. AI models can analyze trending topics and user preferences to curate and recommend relevant content, keeping websites fresh and engaging.

4. Content Generation: Hugging Face’s NLP models can assist in web content generation. It can automatically generate articles, product descriptions, or other textual content, saving time and effort for web developers and content creators.

5. Search Functionality: Hugging Face’s NLP capabilities can enhance website search functionality. It enables more accurate search results by understanding user queries in natural language, leading to improved user satisfaction.

6. Personalization: Hugging Face can be used to implement personalized user experiences on websites. AI models can analyze user preferences and behavior to recommend relevant content or products, increasing user engagement and conversion rates.

7. Data Enrichment: In web development, data enrichment is crucial for improving user profiles and targeting. Hugging Face’s NLP models can help extract valuable insights from user-generated content, enabling better user segmentation and personalized marketing.

8. Natural Language Understanding: Hugging Face’s NLP models aid in understanding and processing user feedback, reviews, and comments on websites. This helps businesses gain valuable insights into customer sentiments and opinions.

9. Chat Support: Implementing Hugging Face chatbots on websites provides instant customer support. Users can get answers to their queries 24/7, leading to improved customer satisfaction and reduced support workload.

10. Multilingual Support: Hugging Face’s NLP models excel in multilingual applications, making websites accessible to a global audience. They can handle content translation, language detection, and localization, expanding a website’s reach.

11. Voice Search: Voice search functionality on websites can be enhanced using Hugging Face’s NLP capabilities. Leveraging its capabilities, users can interact with websites using voice commands, making navigation more convenient.

12. Analytics and Insights: Hugging Face can help web developers gain insights into user behavior, preferences, and engagement patterns through NLP-driven analytics. This data can be used for informed decision-making and website optimization.

Impacts from AI solutions

Hugging Face has had a profound impact across various industries by revolutionizing NLP, Computer Vision, Multimodal, and other AI applications. In healthcare, it has improved patient-doctor interactions through chatbots, while in Finance, it enhances fraud detection and customer support. Leveraging it, the education industry benefits from personalized learning, and e-commerce gains improved recommendation systems.


What is Hugging Face? - TT

Hugging Face is known for its Transformers Python library, which simplifies the process of downloading and training ML models. The library gives developers an efficient way to include one of the ML models hosted on Hugging Face in their workflow and create ML pipelines.
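
The central abstraction in such a pipeline is a chain of tokenization, model inference, and post-processing hidden behind one call. The toy pure-Python sketch below only illustrates that chain; it uses no real model, and every function here is a made-up stand-in, not the Transformers library's API:

```python
def tokenize(text):
    # Stand-in tokenizer: lowercase whitespace split.
    return text.lower().split()

def toy_model(tokens):
    # Stand-in for model inference: score = share of "positive" words.
    positive = {"great", "good", "love"}
    return sum(t in positive for t in tokens) / max(len(tokens), 1)

def postprocess(score):
    # Map the raw score to a labeled result, as a real pipeline would.
    return {"label": "POSITIVE" if score > 0.2 else "NEGATIVE", "score": score}

def sentiment_pipeline(text):
    return postprocess(toy_model(tokenize(text)))

result = sentiment_pipeline("I love this great library")
```

A real Transformers pipeline performs the same three stages with a pretrained tokenizer and model downloaded from the Hub.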

The Hugging Face Hub is where to find some of the main features of Hugging Face, including the following:

  • Models. Hugging Face hosts a large library of models that users can filter by type. As of this writing, there are more than 300,000 models on Hugging Face, including some of the top open source ML models.
  • Data sets. Data sets help train models to understand patterns and relationships between data -- and creating a good data set can be difficult. Hugging Face provides access to data sets uploaded by the community that users can access. Some example data sets in the Hugging Face library include the following:
    • the_pile_books3, which contains all data from Bibliotik in plain text. Bibliotik is a repository of 197,000 books.
    • wikipedia, which contains data from Wikipedia.
    • Anthropic/hh-rlhf, which contains human preference data about the helpfulness and harmlessness of AI outputs.
    • imdb, which contains a large collection of movie reviews.
  • Spaces. Machine learning models on their own typically require technical knowledge to implement and use. Spaces packages models in a user-friendly experience that lets users showcase their work. Hugging Face provides the computing resources necessary to host demos. Spaces doesn't require any technical knowledge to use. Some examples of Hugging Face Spaces include the following:
    • LoRA the Explorer image generator. Users can generate images in a variety of different styles based on a prompt.
    • MusicGen music generator. MusicGen lets users generate music based on a description of the desired output or sample audio.
    • Image to Story. Users can upload an image, and a large language model uses text generation to write a story based on it.

Sign up for Hugging Face

Hugging Face is free to sign up for as a community contributor. Users get a Git-based repository where they can store Models, Datasets and Spaces. After creating an account, users can do the following:

  • Check the activity feed.
  • Access the Hugging Face Hub.
  • Create organizations or private repositories.
  • Explore their profile and adjust settings.
  • Initiate a new Model, Dataset or Space.
  • Discover the latest trends within the Hugging Face community.
  • Review the organizations the user is a part of and access their specific sections.
  • Access useful ML resources and documentation.

Hugging Face also offers a paid pro account that gives users access to more features, and an enterprise account at a slightly higher rate. 


Some more AI tools



Potential Value Opportunities



some common 2024 AI Personal tools


Potential Challenges



Candidate Solutions



Step-by-step guide for Example



sample code block




Recommended Next Steps



Related content