
Available courses

This free course is a gentle three-part introduction to data wrangling with Python. It is intended to introduce professionals working with data to essential methods and techniques in data wrangling. Kayalvizhi Thirumavalavan, a senior data scientist at SupportVectors, will be the main instructor.

8 Lessons
Data Science

Activities and Timing
- Main Sessions: Meets every Sunday from 11 AM to evening.
- Lab access: Lab hardware resources are available 24/7 for the duration of the course.
- Help sessions: Every day by appointment.
- Lab solutions walkthrough: The teaching staff of AI engineers will announce their sessions on an ongoing basis.
- Quiz: There will be two quizzes on each topic. The teaching AI engineers will hold review sessions to explain the solutions.

In the dynamic and ever-expanding field of artificial intelligence, mastering Retrieval Augmented Generation (RAG) is crucial for professionals aiming to implement state-of-the-art AI solutions in enterprise settings. This 2-month course on "Advanced Methods of RAG Engineering" is meticulously designed for engineers seeking to deepen their expertise and apply advanced RAG techniques to build high-performance AI applications.

The course emphasizes hands-on learning through extensive lab exercises, practical projects, quizzes, and case studies. Participants will engage with advanced concepts and practical applications of RAG, acquiring both theoretical knowledge and practical skills. The curriculum covers the latest advancements and techniques in RAG, providing a comprehensive understanding of the field.

Course Topics
The following key topics will be covered:
- What is RAG: An introduction to the fundamentals and significance of Retrieval Augmented Generation. Participants will learn how RAG combines retrieval mechanisms with generative models to enhance the quality and relevance of generated outputs.
- Evolution of RAG Techniques: Advancements from baseline RAG methods to more complex and efficient techniques, including a historical overview and the technological breakthroughs that have shaped the current landscape of RAG.
- Overview of RAG Solutions: A comprehensive survey of RAG solutions available in the industry, covering the tools, libraries, and frameworks that support RAG implementations and their respective use cases.
- LlamaIndex/LangChain: Practical applications using LlamaIndex and LangChain for RAG implementation. Participants will work with these frameworks to understand their architecture and how to use them effectively in real-world scenarios.
- Data Ingestion - Unstructured: Techniques for ingesting and processing unstructured data for RAG systems, including methods for cleaning, organizing, and preparing data so it is suitable for retrieval and generation tasks.
- Re-ranking Strategies: Advanced strategies for re-ranking retrieved documents to improve accuracy. Participants will learn about various re-ranking algorithms and how to implement them to improve the performance of RAG systems.
- Text Embedding Models: Using text embedding models to enhance retrieval and generation. This topic covers different embedding techniques and their applications in building more effective RAG systems.
- RAG Evaluation - RAGAS: Methods for evaluating RAG systems with the RAGAS framework. Participants will learn how to assess the performance of their RAG models and identify areas for improvement.
- GraphRAG: Implementing RAG with graph-based approaches for improved performance, exploring how graph structures provide new ways to handle complex, interconnected data.
- Hybrid Search and its Necessity: The role and implementation of hybrid search techniques. Participants will learn how to combine different search methods to create more robust and accurate RAG systems.
- Sparse Neural Retriever - ColBERTv2: Leveraging ColBERTv2 for efficient neural retrieval. This topic covers the architecture and use cases of ColBERTv2, a state-of-the-art retriever model.
- Sparse Neural Retriever - XTR: Advanced techniques using XTR. Participants will explore the capabilities and implementation of XTR in enhancing RAG performance.
- RAG Strategies - Query Transformation: Transforming queries to enhance RAG performance, including techniques for modifying and optimizing queries to improve retrieval accuracy and relevance.
- Sparse Neural Retriever - SPLADE: Utilizing SPLADE for sparse neural retrieval and its applications, covering the architecture and benefits of SPLADE in building efficient RAG systems.
- Multimodal Embedding Models: Integrating multimodal embedding models for comprehensive data representation. Participants will learn how to handle and combine different data types (text, images, audio) in RAG systems.
- Multimodal Data Augmentation/Feature Engineering: Techniques for augmenting and engineering features from multimodal data, including methods for creating richer, more informative datasets to improve RAG performance.
- Vector Database Retrieval (Qdrant): Implementing vector database retrieval with Qdrant, covering the architecture and use cases of this powerful vector database.
- Optional - Chatbot Frontend: Developing chatbot frontends with tools like Streamlit, Chainlit, Dash, and Reflex. Participants will learn how to create user-friendly interfaces for their RAG applications.

By the end of this course, participants will have a deep understanding of advanced RAG techniques and will be equipped to apply them to develop and optimize high-performance AI applications in an enterprise context. Join us to enhance your proficiency and stay at the forefront of AI technology.

Registration
Reserve your enrollment now. By the end of the first week of the course, pay the rest of the tuition by Zelle or check.

Financial Aid:
- A 50% discount for registrants from Asia or Africa.
- Installment payment plans are available. Reach out to us by email or phone to discuss and get approval.
- Special discount (25% to 100%) for people with disabilities.
- Special discount for veterans.
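The retrieve-then-generate pattern at the heart of RAG can be previewed without any framework. Below is a minimal, dependency-free sketch: a toy bag-of-words retriever stands in for the learned embeddings and vector database (such as Qdrant) you would use in practice, and the top-ranked documents are stitched into the prompt sent to a generator. All document text and function names here are illustrative, not from any library.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": a sparse term-count vector.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def rag_prompt(query, docs):
    # Augment the generator's prompt with the retrieved context.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "Qdrant is a vector database for similarity search.",
    "SPLADE produces sparse lexical representations for retrieval.",
    "Paris is the capital of France.",
]
print(rag_prompt("What is a vector database?", docs))
```

In the course itself, `embed` is replaced by a trained embedding model and `retrieve` by a vector database query, but the control flow stays the same.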

18 Lessons

In a world driven by data, the ability to transform raw information into compelling visuals is a critical skill. Data visualization bridges the gap between analysis and communication, turning complex data into clear, actionable insights. This course is designed to equip you with the expertise to create impactful visual narratives, enabling you to convey ideas effectively across diverse audiences.

5 Lessons
Data Science

Activities and Timing
- Main Sessions: Meets every Monday and Wednesday evening from 7 PM PST.
- Lab access: Lab hardware resources are available 24/7 for the duration of the course.
- Help sessions: Every day by appointment.
- Lab solutions walkthrough: The teaching staff of AI engineers will announce their sessions on an ongoing basis.
- Quiz: There will be two quizzes on each topic. The teaching AI engineers will hold review sessions to explain the solutions.

In today's technological landscape, mastering Machine Learning Operations (MLOps) and Large Language Model Operations (LLMOps) is essential for driving innovation and efficiency. This course offers a comprehensive exploration of MLOps and LLMOps, equipping participants with the knowledge and skills to streamline workflows, optimize deployments, and ensure robust model management in enterprise settings.

Course Topics:
- What is MLOps and why does it matter? MLOps involves deploying and maintaining machine learning models in production reliably and efficiently. It bridges data science and operations, enhancing operational efficiency and model scalability.
- What is LLMOps? LLMOps extends MLOps principles to large language models, focusing on model fine-tuning, efficient inference, and handling computational requirements.
- MLOps Architecture and Workflows: Understand key components of MLOps architecture, including data pipelines, model training workflows, and deployment strategies. Learn about tools like Jenkins, Airflow, MLflow, Flyte, and Kubeflow.
- Kubernetes: Learn Kubernetes fundamentals for managing containerized applications and deploying machine learning models at scale.
- Feature Stores and Model Management: Explore the role of feature stores in managing features and best practices for model management and storage, ensuring versioning and reproducibility.
- Ray Framework: Discover the Ray framework for scaling machine learning applications, including Ray Data, Ray Train, and Ray Serve.
- Scalable Model Inference: Key considerations in designing scalable inference serving solutions, optimizing performance, and managing resources.
- NVIDIA Triton and TensorRT: Leverage NVIDIA Triton and TensorRT for efficient model inference serving in resource-intensive scenarios.
- Observability: Implement best practices for monitoring, logging, and metrics tracking to maintain model health and performance.
- Model Security: Ensure model security with tools like LlamaGuard, focusing on threat detection and vulnerability assessment.

By the end of this course, participants will be equipped to implement MLOps and LLMOps effectively, enhancing the efficiency, scalability, and security of their machine-learning workflows and driving greater value in their enterprise settings.

Registration
Reserve your enrollment now. By the end of the first week of the course, pay the rest of the tuition by Zelle or check.

Financial Aid:
- A 50% discount for registrants from developing nations in Asia or Africa.
- Installment payment plans are available. Reach out to us by email or phone to discuss and get approval.
- Special discount (25% to 100%) for people with disabilities.
- Special discount for veterans.
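The versioning-and-reproducibility idea behind model registries (which MLflow and feature stores implement at production scale) can be sketched with nothing but the standard library. The registry layout, model name, and metric below are invented for illustration; the point is that content-addressing the training configuration makes every run retrievable and comparable.

```python
import hashlib
import json

def register_model(registry, name, params, metrics):
    # Content-address the training configuration so every run is
    # reproducible and retrievable by a stable version hash.
    blob = json.dumps(params, sort_keys=True).encode()
    version = hashlib.sha256(blob).hexdigest()[:12]
    registry.setdefault(name, {})[version] = {
        "params": params,
        "metrics": metrics,
    }
    return version

def best_version(registry, name, metric):
    # Pick the registered version with the highest value of `metric`.
    versions = registry[name]
    return max(versions, key=lambda v: versions[v]["metrics"][metric])

registry = {}
v1 = register_model(registry, "churn", {"lr": 0.1, "depth": 3}, {"auc": 0.81})
v2 = register_model(registry, "churn", {"lr": 0.05, "depth": 5}, {"auc": 0.87})
print(best_version(registry, "churn", "auc"))  # prints the higher-AUC version
```

Real registries add artifact storage, lineage, and stage transitions (staging/production), but the same hash-the-config discipline underlies them.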

18 Lessons
Data Science

This course introduces engineers to the fundamentals of graph theory, network science, and advanced topics in graph neural networks (GNNs). The course consists of 16 sessions, each 3 hours long, with additional 2-hour follow-up sessions with teaching assistants.

Activities and Timing
- Main Sessions: Meets every Tuesday and Thursday evening from 7 PM PST.
- Lab access: Lab hardware resources are available 24/7 for the duration of the course.
- Help sessions: Every day by appointment.
- Lab solutions walkthrough: The teaching staff of AI engineers will announce their sessions on an ongoing basis.
- Quiz: There will be two quizzes on each topic. The teaching AI engineers will hold review sessions to explain the solutions.

Course Syllabus
- Introduction to Graphs and Basic Algorithms: Definition and types of graphs (directed, undirected, weighted, unweighted); graph terminology (vertices, edges, degree, paths, cycles); graph traversal techniques (BFS, DFS); shortest path algorithms (Dijkstra, Bellman-Ford); minimum spanning trees (Prim's and Kruskal's algorithms).
- Network Science and Complex Network Analysis: Introduction to network science; types of networks (social, biological, technological); graph Laplacian and eigenvalue decomposition; spectral graph theory, eigenvalues and eigenvectors.
- Measures of Centrality and Community Detection Techniques: Degree, betweenness, closeness, and eigenvector centrality; modularity and community structure; Girvan-Newman algorithm; Louvain method.
- Introduction to Network Representation Learning: Importance of network representation; traditional methods vs. modern approaches; node embeddings (DeepWalk, Node2Vec); edge and graph embeddings.
- Matrix Factorization and Deep Learning for Network Representation: Adjacency matrix factorization; Laplacian eigenmaps; deep learning approaches such as autoencoders for graphs.
- Advanced Embedding Techniques: Variational graph autoencoders; GraphSAGE.
- Introduction to Graph Neural Networks (GNN): Basic concepts and history; applications of GNNs.
- Message Passing Neural Networks (MPNN): Message passing formulation; aggregation and update functions.
- Graph Convolutional Networks (GCN): Convolutional operations on graphs; training and applications of GCNs.
- Graph Attention Networks (GAT): Attention mechanism in graphs; implementation and use cases.
- Graph Neural Networks with Transformers: Incorporating transformers in GNNs; benefits and challenges.
- Advanced GNN Architectures: Graph recurrent networks (GRN); graph autoencoders (GAE).
- Heterogeneous Graph Neural Networks: Dealing with different types of nodes and edges; applications in multi-relational data.
- Applications in Drug Discovery and Healthcare: GNNs in drug-target interaction; predictive models in healthcare.
- Applications in Social Network Analysis and Recommendation Systems: Community detection in social networks; influence and information spread; GNNs in recommendation engines; enhancing recommendations with graph data.
- Case Studies and Future Directions: Understanding AlphaFold and protein structure prediction; GNNs in predicting protein structures; emerging trends and research areas; ethical considerations and challenges.

Course Components
- Lectures: Cover theoretical concepts and practical applications.
- Labs: Hands-on exercises and projects to apply learned concepts.
- Quizzes: Periodic assessments to gauge understanding and retention.
- Projects: Real-world projects to demonstrate mastery of topics.
- Readings: Research papers and articles for in-depth knowledge.

Outcome
By the end of this course, participants will have a comprehensive understanding of graph neural networks, from fundamental concepts to advanced applications. They will be equipped with the skills to implement GNNs in various domains and contribute to cutting-edge research and development in the field.

Registration
Reserve your enrollment now. By the end of the first week of the course, pay the rest of the tuition by Zelle or check.

Financial Aid:
- A 50% discount for registrants from Asia or Africa.
- Installment payment plans are available. Reach out to us by email or phone to discuss and get approval.
- Special discount (25% to 100%) for people with disabilities.
- Special discount for veterans.
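As a taste of the first syllabus unit, here is a compact implementation of one of the shortest-path algorithms listed above, Dijkstra's algorithm, using only the standard library. The small example graph is invented for illustration.

```python
import heapq

def dijkstra(graph, source):
    # Single-source shortest paths on a non-negatively weighted graph
    # given as {node: [(neighbor, weight), ...]}. Returns {node: distance}.
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {
    "a": [("b", 1), ("c", 4)],
    "b": [("c", 2), ("d", 6)],
    "c": [("d", 3)],
}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 1, 'c': 3, 'd': 6}
```

Note the greedy invariant: because edge weights are non-negative, a node's distance is final the first time it is popped with its current best value; Bellman-Ford, also covered in the course, drops that restriction at higher cost.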

21 Lessons
Data Science

In today's rapidly evolving enterprise landscape, mastering the art of LLM (Large Language Model) fine-tuning has become indispensable for engineers seeking to enhance the performance and specificity of AI models. This course is meticulously designed to provide you with the practical skills and theoretical knowledge required to excel in fine-tuning LLMs, ensuring that your models not only perform optimally but also maintain robustness and accuracy across diverse applications.

This 2-month intensive course is predominantly hands-on, featuring extensive lab exercises, projects, quizzes, and case studies. It focuses on contemporary fine-tuning techniques that go beyond simply feeding more data into an LLM. We delve into sophisticated methods that prevent catastrophic forgetting and reduce hallucinations while achieving measurable improvements in domain-specific tasks.

Course Overview
The course is structured to guide you through a comprehensive journey of LLM fine-tuning, from foundational concepts to advanced techniques. The key topics covered include:
- Overview of the Journey: Understand the evolution of fine-tuning practices and the importance of these techniques in modern AI. This module sets the stage by providing historical context and highlighting the significance of fine-tuning in achieving high-performance AI models.
- Performance Evaluations: Learn how to assess the performance of fine-tuned models, ensuring they meet desired benchmarks. This includes understanding various evaluation metrics, tools, and best practices for validating model improvements.
- Dangers of Fine-tuning - Catastrophic Forgetting and Hallucinations: Explore the risks associated with fine-tuning and methods to mitigate them. This topic covers the challenges of maintaining model integrity and reliability during and after fine-tuning.
- Parameter-Efficient Fine-Tuning: Discover techniques for efficient fine-tuning that optimize model performance without excessive computational costs. Learn about methods like AdapterFusion, Prompt Tuning, and other parameter-efficient strategies.
- LoRA Family of Methods: Gain insights into Low-Rank Adaptation methods for fine-tuning LLMs. This section delves into the specifics of LoRA techniques, their advantages, and implementation strategies.
- Fine-tuning Encoder Models: Focus on strategies for fine-tuning encoder-based models for improved feature extraction. Learn how to adapt these models to various tasks and enhance their performance through targeted fine-tuning.
- Fine-tuning LLMs: Dive deep into advanced techniques specific to large language models, including the nuances of tuning models like GPT, BERT, and their derivatives.
- Fine-tuning Vision Models: Learn how to adapt vision models to specific tasks through fine-tuning. This topic covers techniques for enhancing models like CNNs and Vision Transformers for improved image recognition and analysis.
- Fine-tuning Multimodal Models: Explore the challenges and solutions for fine-tuning models that integrate multiple data types, and understand how to handle and optimize models that process both text and image data simultaneously.
- Fine-tuning vs RAG: Understand the differences and complementarities between fine-tuning and Retrieval-Augmented Generation (RAG), and learn how to combine the two approaches effectively for enhanced model performance.
- Cloud-based Fine-Tuning (Modal/Runpod): Utilize cloud platforms for scalable fine-tuning solutions. This section provides insights into leveraging cloud resources to perform large-scale fine-tuning efficiently.
- Data Augmentation and Synthetic Data Creation: Enhance your datasets with synthetic data to improve model training and robustness, exploring various techniques for generating and using synthetic data to augment your training process.
- Fine-tuning Tools - Unsloth, Axolotl, and Others: Get hands-on experience with cutting-edge tools that facilitate efficient fine-tuning, and learn how to use them to streamline and optimize your fine-tuning workflows.

By the end of this course, you will have developed a robust understanding of fine-tuning techniques and their practical applications across various domains. The skills you acquire will empower you to refine AI models that deliver precise, reliable, and high-performance results in any enterprise setting. Join us on this journey to become proficient in one of the most critical aspects of modern AI development.

We look forward to welcoming you to this challenging and rewarding course, where you will engage with complex concepts, collaborate on innovative projects, and emerge as a skilled professional in the field of LLM fine-tuning.

Registration
Reserve your enrollment now. By the end of the first week of the course, pay the rest of the tuition by Zelle or check.

Financial Aid:
- A 50% discount for registrants from Asia or Africa.
- Installment payment plans are available. Reach out to us by email or phone to discuss and get approval.
- Special discount (25% to 100%) for people with disabilities.
- Special discount for veterans.
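The economy of the LoRA family of methods mentioned above comes from replacing the full weight update of a layer with a low-rank product B·A, so only r·(d_in + d_out) parameters are trained instead of d_out·d_in. A quick back-of-the-envelope calculation (the layer dimensions and rank below are chosen arbitrarily for illustration) shows why this matters:

```python
def lora_params(d_in, d_out, rank):
    # Trainable parameters of a LoRA adapter: B (d_out x r) plus A (r x d_in),
    # compared with fully fine-tuning the original d_out x d_in weight matrix.
    full = d_out * d_in
    lora = rank * (d_in + d_out)
    return full, lora, lora / full

full, lora, ratio = lora_params(4096, 4096, 8)
print(full, lora, f"{ratio:.2%}")  # 16777216 65536 0.39%
```

For a square 4096-dimensional projection at rank 8, the adapter trains well under 1% of the layer's parameters, which is what makes fine-tuning large models tractable on modest hardware.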

19 Lessons

This workshop focuses on in-depth expertise in dominant neural network architectures, large language models, and transformer architectures. In this sixteen-week course, you will develop understanding and fluency in one facet of this fascinating field each week.

Activities and Timing
- Main Sessions: Meets every Tuesday and Thursday evening from 7 PM PST.
- Lab access: Lab hardware resources are available 24/7 for the duration of the course.
- Help sessions: Every day by appointment.
- Lab solutions walkthrough: The teaching staff of AI engineers will announce their sessions on an ongoing basis.
- Quiz: There will be two quizzes on each topic. The teaching AI engineers will hold review sessions to explain the solutions.

Course Description
This course is designed to introduce students to the foundational and advanced concepts of artificial intelligence, with a focus on neural networks, large language models, and generative AI. Through a combination of lectures, hands-on coding exercises, and project work, students will gain a deep understanding of the mathematical and technical underpinnings of AI technologies. They will explore the theory behind neural networks, delve into various architectures, and understand the applications and implications of AI in the real world.

Course Objectives
- Understand the mathematical foundations of AI, including the universal approximation theorem.
- Learn about activation functions, back-propagation, regularization, and optimization techniques.
- Explore various neural network architectures, including feed-forward nets, CNNs, RNNs, and Transformers.
- Gain hands-on experience with autoencoders, GANs, diffusion models, normalizing flows, and energy models.
- Apply knowledge in practical coding exercises and projects to solve real-world problems.
- Study various transformer-based architectures in depth.

Registration
Reserve your enrollment now. By the end of the first week of the course, pay the rest of the tuition by Zelle or check.

Financial Aid:
- A 50% discount for registrants from Asia or Africa.
- Installment payment plans are available. Reach out to us by email or phone to discuss and get approval.
- Special discount (25% to 100%) for people with disabilities.
- Special discount for veterans.
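Two of the objectives above, activation functions and back-propagation, fit in a few lines for a single logistic neuron. This is a standard-library sketch with a hand-derived gradient (the toy dataset is invented); the full course works up from here to multi-layer networks and automatic differentiation.

```python
import math

def sigmoid(z):
    # The logistic activation function.
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(data, epochs=500, lr=0.5):
    # One neuron trained by hand-derived back-propagation. For the
    # cross-entropy loss, dL/dz = sigmoid(w*x + b) - y, so:
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w * x + b)
            grad = p - y            # dL/dz
            w -= lr * grad * x      # dL/dw = dL/dz * x
            b -= lr * grad          # dL/db = dL/dz
    return w, b

# Learn a simple threshold: negative inputs -> 0, positive inputs -> 1.
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
w, b = train_neuron(data)
print(sigmoid(w * -1.5 + b) < 0.5, sigmoid(w * 1.5 + b) > 0.5)  # True True
```

The same update rule, applied layer by layer via the chain rule, is exactly what back-propagation does in deep networks.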

47 Lessons
Data Science

In the rapidly evolving landscape of artificial intelligence, mastering AI agents is essential for any enterprise aiming to leverage AI effectively. Our 8-week course is designed specifically for professional engineers who seek to deepen their understanding and practical skills in developing and deploying sophisticated AI systems.

Activities and Timing
- Main Sessions: Meets every Tuesday and Thursday evening from 7 PM PST.
- Lab access: Lab hardware resources are available 24/7 for the duration of the course.
- Help sessions: Every day by appointment.
- Lab solutions walkthrough: The teaching staff of AI engineers will announce their sessions on an ongoing basis.
- Quiz: There will be two quizzes on each topic. The teaching AI engineers will hold review sessions to explain the solutions.

This course offers a balanced mix of theoretical foundations and hands-on experience. Through a series of projects and labs, participants will gain practical knowledge and insights into the intricacies of AI agent development. We will use the AutoGen and CrewAI frameworks to build complex AI applications, providing real-world contexts for learning and application.

Course Topics
The following key topics will be covered:
- Introduction to AI Agents: Understanding the basics and significance of AI agents in modern applications.
- Agentic Reasoning and Planning: Techniques and methodologies enabling AI agents to reason and plan effectively.
- Multi-Agent Orchestration: Strategies for managing and coordinating multiple AI agents to work together harmoniously.
- AutoGen Framework: Hands-on projects using the AutoGen framework to build and deploy AI agents.
- CrewAI Framework: Practical labs utilizing CrewAI for complex AI application development.
- Tools and Technologies: Exploration of various tools and technologies that support AI agent development and deployment.
- Research Papers: Reading and discussing significant research papers to understand current trends and advancements in the field.
- Practical Applications: Case studies and real-world examples illustrating the application of AI agents in various industries.

By the end of this course, participants will be equipped with the skills and knowledge required to design, implement, and manage AI agents in enterprise environments. Join us to advance your expertise and stay at the forefront of AI technology.

Registration
Reserve your enrollment now. By the end of the first week of the course, pay the rest of the tuition by Zelle or check.

Financial Aid:
- A 50% discount for registrants from Asia or Africa.
- Installment payment plans are available. Reach out to us by email or phone to discuss and get approval.
- Special discount (25% to 100%) for people with disabilities.
- Special discount for veterans.
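Frameworks such as AutoGen and CrewAI wrap far richer versions of one core loop: a planner selects a tool, the tool runs, and the result feeds the agent's working memory. Here is a stripped-down, framework-free sketch of that loop; the tool names and the fixed plan are invented, and in a real agent the plan would come from an LLM rather than a hard-coded list.

```python
def calculator(expression):
    # A deliberately tiny "tool": evaluate additions like "2+3".
    a, _, b = expression.partition("+")
    return str(int(a) + int(b))

def echo(text):
    # A trivial second tool, so the dispatch is non-degenerate.
    return text

TOOLS = {"calculator": calculator, "echo": echo}

def run_agent(plan):
    # Minimal agent loop: each step names a tool and its input;
    # every result is appended to the agent's working memory.
    memory = []
    for tool_name, payload in plan:
        result = TOOLS[tool_name](payload)
        memory.append((tool_name, result))
    return memory

plan = [("calculator", "2+3"), ("echo", "done")]
print(run_agent(plan))  # [('calculator', '5'), ('echo', 'done')]
```

Multi-agent orchestration, one of the course topics, generalizes this: several such loops exchange their memories as messages, with a coordinator deciding who acts next.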

26 Lessons

Activities and Timing
- Main Sessions: Meets every Sunday from 11 AM to evening.
- Lab access: Lab hardware resources are available 24/7 for the duration of the course.
- Help sessions: Every day by appointment.
- Lab solutions walkthrough: The teaching staff of AI engineers will announce their sessions on an ongoing basis.
- Quiz: There will be two quizzes on each topic. The teaching AI engineers will hold review sessions to explain the solutions.

In the ever-evolving field of artificial intelligence, mastering the nuances of prompt engineering is essential for professionals aiming to harness the full potential of generative AI. This 2-month course on "Advanced Techniques in Prompt Engineering" is meticulously designed for engineers who are keen to deepen their expertise and apply advanced prompt engineering techniques in an enterprise setting.

This course emphasizes a hands-on approach, with extensive lab exercises, practical projects, quizzes, and case studies. Participants will gain practical experience in building generative AI applications and will engage with cutting-edge research papers to stay abreast of recent advancements in the field.

Course Topics
The following key topics will be covered:
- LLM Inference Endpoints and Frameworks: Understanding the infrastructure and frameworks supporting large language models.
- Overview of Prompting: Exploring various types of prompting, including text, visual, audio, and video.
- Prompting as Control Flow: Techniques for using prompts to control the flow of AI operations.
- Limitations of LLMs and Manual Prompting: Identifying and addressing the constraints of large language models.
- COSTAR and Other Prompting Techniques: Advanced techniques for effective prompting.
- Case Studies & Domain Applications: Practical examples and applications in various domains.
- Prompting in Structured Output Generation: Methods for generating structured outputs through prompts.
- Function Call-based Prompting (Instructor): Using function calls to guide prompting.
- Constrained Sampling (Outlines): Techniques for constrained sampling to improve prompt outcomes.
- Constrained Sampling (Microsoft Guidance): Insights from Microsoft's approaches to constrained sampling.
- Programmatic Prompting and Automation: Automating prompt generation and optimization.
- Programmatic Prompting (DSPy): Leveraging DSPy for programmatic prompting.
- Programmatic Prompting (SAMMO): Using SAMMO for advanced prompt automation.
- Prompt Program Demo (STORM): Demonstrations of prompt programs using STORM.
- Prompt Monitoring: Techniques for observing and monitoring prompt performance.
- LLM Observability (Arize Phoenix): Monitoring large language models with Arize Phoenix.
- LLM Observability (LogFire): Using LogFire for prompt observability.
- LLM Observability (LangFuse): LangFuse for comprehensive LLM monitoring.
- Prompt Security: Ensuring the security and integrity of prompt engineering processes.
- Synthetic Data Generation: Creating synthetic data for robust prompt development.
- TextGrad: Optimizing prompts automatically through textual-gradient feedback.

By the end of this course, participants will possess a comprehensive understanding of advanced prompt engineering techniques and will be able to apply these skills to develop and optimize generative AI applications in an enterprise context. Join us to enhance your proficiency and remain at the cutting edge of AI technology.

Registration
Reserve your enrollment now. By the end of the first week of the course, pay the rest of the tuition by Zelle or check.

Financial Aid:
- A 50% discount for registrants from Asia or Africa.
- Installment payment plans are available. Reach out to us by email or phone to discuss and get approval.
- Special discount (25% to 100%) for people with disabilities.
- Special discount for veterans.
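The COSTAR technique listed above structures a prompt into six labeled sections: Context, Objective, Style, Tone, Audience, and Response format. A small template builder makes the pattern concrete; the example section texts are invented for illustration, and real use would feed the assembled string to an LLM endpoint.

```python
def costar_prompt(context, objective, style, tone, audience, response_format):
    # Assemble a prompt from the six COSTAR sections in canonical order.
    sections = [
        ("Context", context),
        ("Objective", objective),
        ("Style", style),
        ("Tone", tone),
        ("Audience", audience),
        ("Response format", response_format),
    ]
    return "\n".join(f"# {name}\n{body}" for name, body in sections)

prompt = costar_prompt(
    context="You are answering questions about a course catalog.",
    objective="Summarize the prompt-engineering course in two sentences.",
    style="Plain, concrete technical writing.",
    tone="Neutral and helpful.",
    audience="Professional engineers.",
    response_format="Two sentences, no bullet points.",
)
print(prompt)
```

Keeping each concern in its own section is what makes such prompts easy to vary and A/B test programmatically, which is where the DSPy and SAMMO topics pick up.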

10 Lessons

In a world driven by data, the ability to transform raw information into compelling visuals is a critical skill. Data visualization bridges the gap between analysis and communication, turning complex data into clear, actionable insights. This course is designed to equip you with the expertise to create impactful visual narratives, enabling you to convey ideas effectively across diverse audiences.

5 Lessons

In this course, you will learn essential data wrangling skills using Python, including data transformations, cleanup, and handling missing values. You will work with pandas and NumPy to manipulate tabular data, perform string and DateTime operations, and learn data assembly techniques. The course also covers preprocessing data for machine learning using Scikit-Learn. You can attend either in person at the SupportVectors facility or remotely via Zoom. All sessions are live-streamed, recorded, and accessible on the course portal. By the end of the course, you will have the skills needed to prepare data for analysis and machine learning.
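The core moves this course teaches, parsing, type conversion, and missing-value handling, can be previewed with the standard library alone. The toy CSV below is invented; in class you would do the same steps with pandas (`read_csv`, `to_datetime`, `fillna`), but the logic is identical.

```python
import csv
import io
from datetime import datetime
from statistics import mean

RAW = """date,city,temp_c
2024-01-05,Madrid,9.1
2024-01-06,Madrid,
2024-01-07,Madrid,11.3
"""

def load_and_clean(text):
    # Parse the CSV, convert dates and numbers, and impute the missing
    # temperature with the mean of the observed values in that column.
    rows = list(csv.DictReader(io.StringIO(text)))
    temps = [float(r["temp_c"]) for r in rows if r["temp_c"]]
    fill = round(mean(temps), 2)
    for r in rows:
        r["date"] = datetime.strptime(r["date"], "%Y-%m-%d").date()
        r["temp_c"] = float(r["temp_c"]) if r["temp_c"] else fill
    return rows

rows = load_and_clean(RAW)
print(rows[1]["temp_c"])  # 10.2, the mean of the two observed values
```

Mean imputation is only one of several strategies covered (forward-fill, interpolation, or dropping rows are others); the point here is the shape of the pipeline, not the choice of fill.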

5 Lessons

Activities and Timing
- Main Bootcamp Sessions: Meets every Saturday from 11 AM to evening.
- Lab access: Lab hardware resources are available 24/7 for the duration of the course.
- Help sessions: Every day by appointment.
- Lab solutions walkthrough: The teaching staff of AI engineers will announce their sessions on an ongoing basis.
- Quiz: There will be two quizzes on each topic. The teaching AI engineers will hold review sessions to explain the solutions.

This is the eagerly awaited hackathon-style, coding-centered boot camp built around real-world, exciting projects. The goal is to make you profoundly confident and fluent in applying LLMs to solve an extensive range of real-world problems in vector embeddings, semantic AI search, retrieval-augmented generation, multi-modal learning, video comprehension, adversarial attacks, computer vision, audio processing, natural language processing, tabular data, anomaly and fraud detection, healthcare applications, and clever techniques of prompt engineering.

You will work in teams of 4-6 engineers in an environment that closely reproduces the innovation and development that is the quintessence of the Silicon Valley spirit. Each team will get a meeting room with a state-of-the-art multimedia setup and a powerful AI server with an RTX 4090 for exclusive use during the boot camp. These will also be accessible remotely so that teams can continue to work on them during the workdays.

You will also have unrestricted, admin-privilege access to a massive 4-GPU AI server on a time-shared basis for the three-month duration of the boot camp. Beyond this, you will benefit from 10-gigabit networking and the large server cluster to take your projects to actual production. SupportVectors will let you use the compute resources for an additional four weeks if you need to finish any remaining aspects of your projects.

A dedicated team of experienced teaching faculty with deep industry and AI experience on large-scale projects will guide you throughout the boot camp. The AI experts bring together deep academic backgrounds, educational excellence, and years of technical leadership roles in the industry.

Activities and components of the boot camp
- An extensive compilation of guided exercises and labs with Jupyter notebooks
- Lecture sessions with clear explanations of the topics involved
- Real-world group projects that aim to be starters for a potential startup, either outside or within your current company or place of work
- Quizzes and assessments for self-evaluation of progress
- "Cambrian Explosion" series: a collective approach to discussing the latest breakthroughs and happenings in LLMs
- Project demonstrations and cohort feedback
- Research paper readings with lucid explanations that bring out the main ideas and their relevance
- Friendly competition between the teams and sharing of best practices
- A reasonably extensive starter code-base to guide your progress and development
- A faculty of highly experienced technical instructors, teaching assistants, and lab assistants providing help around the clock as needed
- For those seeking jobs: mock interview practice, guidance with resumes, and job placement help
- SupportVectors Creative Studio will help you create professional videos explaining your projects and provide creative-writing help in developing detailed technical articles around your projects.

Breakfast and lunch are provided during the boot camp. The break room is stocked with snacks, fruits, beverages, and gourmet coffee.

Registration
Reserve your enrollment now. By the end of the first week of the course, pay the rest of the tuition by Zelle or check.

Financial Aid:
- A 50% discount for registrants from Asia or Africa.
- Installment payment plans are available. Reach out to us by email or phone to discuss and get approval.
- Special discount (25% to 100%) for people with disabilities.
- Special discount for veterans.

44 Lessons

Activities, Timing, and Details:
Main Sessions: Meets every Monday and Wednesday evening, from 7 PM PST.
Lab access: Lab hardware resources are available 24/7 for the duration of the course.
Help sessions: Every day by appointment.
Lab solutions walkthrough: The teaching staff AI engineers will announce their sessions on an ongoing basis.
Quiz: There will be two quizzes on each topic. The teaching AI engineers will hold review sessions to explain the solutions.

In today's technological landscape, mastering Machine Learning Operations (MLOps) and Large Language Model Operations (LLMOps) is essential for driving innovation and efficiency. This course offers a comprehensive exploration of MLOps and LLMOps, equipping participants with the knowledge and skills to streamline workflows, optimize deployments, and ensure robust model management in enterprise settings.

Course Topics:
- What is MLOps and why does it matter? MLOps involves deploying and maintaining machine learning models in production reliably and efficiently. It bridges data science and operations, enhancing operational efficiency and model scalability.
- What is LLMOps? LLMOps extends MLOps principles to large language models, focusing on model fine-tuning, efficient inference, and handling computational requirements.
- MLOps Architecture and Workflows: Understand key components of MLOps architecture, including data pipelines, model training workflows, and deployment strategies. Learn about tools like Jenkins, Airflow, MLflow, Flyte, and Kubeflow.
- Kubernetes: Learn Kubernetes fundamentals for managing containerized applications and deploying machine learning models at scale.
- Feature Stores and Model Management: Explore the role of feature stores in managing features, along with best practices for model management and storage, ensuring versioning and reproducibility.
- Ray Framework: Discover the Ray framework for scaling machine learning applications, including Ray Data, Ray Train, and Ray Serve.
- Scalable Model Inference: Key considerations in designing scalable inference-serving solutions, optimizing performance, and managing resources.
- NVIDIA Triton and TensorRT: Leverage NVIDIA Triton and TensorRT for efficient model inference serving in resource-intensive scenarios.
- Observability: Implement best practices for monitoring, logging, and metrics tracking to maintain model health and performance.
- Model Security: Ensure model security with tools like LlamaGuard, focusing on threat detection and vulnerability assessment.

By the end of this course, participants will be equipped to implement MLOps and LLMOps effectively, enhancing the efficiency, scalability, and security of their machine-learning workflows and driving greater value in their enterprise settings.

Registration
Reserve your enrollment now. By the end of the first week of the course, pay the rest of the tuition by Zelle or check.

Financial Aid:
- A 50% discount for registrants from developing nations in Asia or Africa.
- Installment payment plans are available. Reach out to us by email or phone to discuss and get approval.
- Special discount (25% to 100%) for people with disabilities.
- Special discount for veterans.
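A toy illustration of the feature-store topic above: versioning makes training reproducible, because a run pinned to a version keeps seeing the same data even after new features land. This is a library-free sketch of the idea, not the API of any real feature-store product:

```python
from dataclasses import dataclass, field


@dataclass
class FeatureStore:
    """Toy in-memory feature store illustrating versioning and reproducibility.

    Real systems add persistence, point-in-time joins, and online/offline
    serving; this sketch only shows the versioning idea.
    """
    _store: dict = field(default_factory=dict)  # name -> list of versioned values

    def write(self, name: str, values: dict) -> int:
        """Append a new immutable version of a feature set; return its version id."""
        versions = self._store.setdefault(name, [])
        versions.append(dict(values))  # copy so later mutation can't corrupt history
        return len(versions) - 1

    def read(self, name: str, version: int = -1) -> dict:
        """Read a specific version (default: latest) for reproducible training runs."""
        return dict(self._store[name][version])


store = FeatureStore()
v0 = store.write("user_stats", {"clicks_7d": 12, "purchases_30d": 1})
v1 = store.write("user_stats", {"clicks_7d": 19, "purchases_30d": 2})

# A training job pinned to v0 keeps seeing the same data, even after v1 lands.
assert store.read("user_stats", v0) == {"clicks_7d": 12, "purchases_30d": 1}
assert store.read("user_stats") == {"clicks_7d": 19, "purchases_30d": 2}
```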

18 Lessons

This workshop focuses on in-depth expertise in dominant neural network architectures, large language models, and transformer architectures. In this sixteen-week course, you will develop understanding and fluency in one facet of this fascinating field each week.

Activities, Timing, and Details:
Main Sessions: Meets every Tuesday and Thursday evening, from 7 PM PST.
Lab access: Lab hardware resources are available 24/7 for the duration of the course.
Help sessions: Every day by appointment.
Lab solutions walkthrough: The teaching staff AI engineers will announce their sessions on an ongoing basis.
Quiz: There will be two quizzes on each topic. The teaching AI engineers will hold review sessions to explain the solutions.

Course Description
This course is designed to introduce students to the foundational and advanced concepts of artificial intelligence, with a focus on neural networks, large language models, and generative AI. Through a combination of lectures, hands-on coding exercises, and project work, students will gain a deep understanding of the mathematical and technical underpinnings of AI technologies. They will explore the theory behind neural networks, delve into various architectures, and understand the applications and implications of AI in the real world.

Course Objectives
- Understand the mathematical foundations of AI, including the universal approximation theorem.
- Learn about activation functions, back-propagation, regularization, and optimization techniques.
- Explore various neural network architectures, including feed-forward nets, CNNs, RNNs, and Transformers.
- Gain hands-on experience with Autoencoders, GANs, Diffusion Models, Normalizing Flows, and Energy-Based Models.
- Apply knowledge in practical coding exercises and projects to solve real-world problems.
- Study various transformer-based architectures in depth.

Registration
Reserve your enrollment now. By the end of the first week of the course, pay the rest of the tuition by Zelle or check.

Financial Aid:
- A 50% discount for registrants from Asia or Africa.
- Installment payment plans are available. Reach out to us by email or phone to discuss and get approval.
- Special discount (25% to 100%) for people with disabilities.
- Special discount for veterans.
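To ground the activation-function and back-propagation objectives above, here is a minimal, dependency-free sketch: a single sigmoid neuron trained by hand-derived gradient descent on a toy 1-D dataset. It is illustrative only; the coursework itself would use a framework such as PyTorch:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: label is 1 when x > 0.5, else 0 (linearly separable in 1-D).
data = [(0.1, 0), (0.2, 0), (0.4, 0), (0.6, 1), (0.8, 1), (0.9, 1)]

w, b, lr = 0.0, 0.0, 1.0

def mse(w, b):
    return sum((sigmoid(w * x + b) - y) ** 2 for x, y in data) / len(data)

loss_before = mse(w, b)
for _ in range(2000):
    for x, y in data:
        p = sigmoid(w * x + b)
        # Back-propagation by hand, via the chain rule through the squared error:
        # dL/dp = 2(p - y); dp/dz = p(1 - p); dz/dw = x; dz/db = 1
        grad_z = 2 * (p - y) * p * (1 - p)
        w -= lr * grad_z * x
        b -= lr * grad_z
loss_after = mse(w, b)

assert loss_after < loss_before  # training reduced the error
```

The same chain-rule bookkeeping, applied layer by layer, is exactly what automatic differentiation performs in a deep network.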

47 Lessons
Data Science

In the rapidly evolving landscape of artificial intelligence, mastering AI agents is essential for any enterprise aiming to leverage AI effectively. Our 8-week course is designed specifically for professional engineers who seek to deepen their understanding and practical skills in developing and deploying sophisticated AI systems.

Activities, Timing, and Details:
Main Sessions: Meets every Tuesday and Thursday evening, from 7 PM PST.
Lab access: Lab hardware resources are available 24/7 for the duration of the course.
Help sessions: Every day by appointment.
Lab solutions walkthrough: The teaching staff AI engineers will announce their sessions on an ongoing basis.
Quiz: There will be two quizzes on each topic. The teaching AI engineers will hold review sessions to explain the solutions.

This course offers a balanced mix of theoretical foundations and hands-on experience. Through a series of projects and labs, participants will gain practical knowledge and insights into the intricacies of AI agent development. We will use the AutoGen and CrewAI frameworks to build complex AI applications, providing real-world contexts for learning and application.

Course Topics
The following key topics will be covered:
- Introduction to AI Agents: Understanding the basics and significance of AI agents in modern applications.
- Agentic Reasoning and Planning: Techniques and methodologies enabling AI agents to reason and plan effectively.
- Multi-Agent Orchestration: Strategies for managing and coordinating multiple AI agents to work together harmoniously.
- AutoGen Framework: Hands-on projects using the AutoGen framework to build and deploy AI agents.
- CrewAI Framework: Practical labs utilizing CrewAI for complex AI application development.
- Tools and Technologies: Exploration of various tools and technologies that support AI agent development and deployment.
- Research Papers: Reading and discussing significant research papers to understand current trends and advancements in the field.
- Practical Applications: Case studies and real-world examples to illustrate the application of AI agents in various industries.

By the end of this course, participants will be equipped with the skills and knowledge required to design, implement, and manage AI agents in enterprise environments. Join us to advance your expertise and stay at the forefront of AI technology.

Registration
Reserve your enrollment now. By the end of the first week of the course, pay the rest of the tuition by Zelle or check.

Financial Aid:
- A 50% discount for registrants from Asia or Africa.
- Installment payment plans are available. Reach out to us by email or phone to discuss and get approval.
- Special discount (25% to 100%) for people with disabilities.
- Special discount for veterans.
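For a flavor of the agentic reasoning-and-planning loop described above, here is a minimal, framework-free sketch (not AutoGen or CrewAI code): an agent repeatedly picks a tool, observes the result, and stops when done. The `toy_policy` function is a hard-coded stand-in for the LLM that would normally make these decisions:

```python
# Toy tools the agent can call; a real agent would expose search, code, etc.
def add(a, b):
    return a + b

def multiply(a, b):
    return a * b

TOOLS = {"add": add, "multiply": multiply}

def toy_policy(goal, observations):
    """Stand-in for an LLM: decide the next action from the goal and history.

    Plans '(2 + 3) * 10' as two tool calls, then finishes.
    """
    if not observations:
        return ("add", (2, 3))                        # step 1: 2 + 3
    if len(observations) == 1:
        return ("multiply", (observations[-1], 10))   # step 2: result * 10
    return ("finish", observations[-1])               # done: report the answer

def run_agent(goal, max_steps=5):
    """ReAct-style loop: choose action -> execute tool -> observe -> repeat."""
    observations = []
    for _ in range(max_steps):
        action, payload = toy_policy(goal, observations)
        if action == "finish":
            return payload
        observations.append(TOOLS[action](*payload))
    raise RuntimeError("agent did not finish within budget")

assert run_agent("compute (2 + 3) * 10") == 50
```

Frameworks like AutoGen and CrewAI supply the policy (an actual LLM), tool schemas, memory, and multi-agent message routing around this same basic loop.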

26 Lessons


Activities, Timing, and Details:
Main Sessions: Meets every Sunday from 11 AM to evening.
Lab access: Lab hardware resources are available 24/7 for the duration of the course.
Help sessions: Every day by appointment.
Lab solutions walkthrough: The teaching staff AI engineers will announce their sessions on an ongoing basis.
Quiz: There will be two quizzes on each topic. The teaching AI engineers will hold review sessions to explain the solutions.

In the ever-evolving field of artificial intelligence, mastering the nuances of prompt engineering is essential for professionals aiming to harness the full potential of generative AI. This 2-month course on "Advanced Techniques in Prompt Engineering" is meticulously designed for engineers who are keen to deepen their expertise and apply advanced prompt engineering techniques in an enterprise setting.

This course emphasizes a hands-on approach, with extensive lab exercises, practical projects, quizzes, and case studies. Participants will gain practical experience in building generative AI applications and will engage with cutting-edge research papers to stay abreast of recent advancements in the field.

Course Topics
The following key topics will be covered:
- LLM Inference Endpoints and Frameworks: Understanding the infrastructure and frameworks supporting large language models.
- Overview of Prompting: Exploring various types of prompting, including text, visual, audio, and video.
- Prompting as Control Flow: Techniques for using prompts to control the flow of AI operations.
- Limitations of LLMs and Manual Prompting: Identifying and addressing the constraints of large language models.
- COSTAR and Other Prompting Techniques: Advanced techniques for effective prompting.
- Case Studies and Domain Applications: Practical examples and applications in various domains.
- Prompting in Structured Output Generation: Methods for generating structured outputs through prompts.
- Function Call-based Prompting (Instructor): Using function calls to guide prompting.
- Constrained Sampling (Outlines): Techniques for constrained sampling to improve prompt outcomes.
- Constrained Sampling (Microsoft Guidance): Insights from Microsoft's approach to constrained sampling.
- Programmatic Prompting and Automation: Automating prompt generation and optimization.
- Programmatic Prompting (DSPy): Leveraging DSPy for programmatic prompting.
- Programmatic Prompting (SAMMO): Using SAMMO for advanced prompt automation.
- Prompt Program Demo (STORM): Demonstrations of prompt programs using STORM.
- Prompt Monitoring: Techniques for observing and monitoring prompt performance.
- LLM Observability (Arize Phoenix): Monitoring large language models with Arize Phoenix.
- LLM Observability (Logfire): Using Logfire for prompt observability.
- LLM Observability (Langfuse): Langfuse for comprehensive LLM monitoring.
- Prompt Security: Ensuring the security and integrity of prompt engineering processes.
- Synthetic Data Generation: Creating synthetic data for robust prompt development.
- TextGrad: Automatic prompt optimization driven by natural-language ("textual gradient") feedback.

By the end of this course, participants will possess a comprehensive understanding of advanced prompt engineering techniques and will be able to apply these skills to develop and optimize generative AI applications in an enterprise context. Join us to enhance your proficiency and remain at the cutting edge of AI technology.

Registration
Reserve your enrollment now. By the end of the first week of the course, pay the rest of the tuition by Zelle or check.

Financial Aid:
- A 50% discount for registrants from Asia or Africa.
- Installment payment plans are available. Reach out to us by email or phone to discuss and get approval.
- Special discount (25% to 100%) for people with disabilities.
- Special discount for veterans.
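The COSTAR technique mentioned above structures a prompt into Context, Objective, Style, Tone, Audience, and Response-format sections. A minimal sketch of assembling such a prompt (the `costar_prompt` helper and its section markers are illustrative inventions, not part of any library):

```python
def costar_prompt(context, objective, style, tone, audience, response_format):
    """Assemble a COSTAR-structured prompt string from its six components."""
    sections = [
        ("CONTEXT", context),
        ("OBJECTIVE", objective),
        ("STYLE", style),
        ("TONE", tone),
        ("AUDIENCE", audience),
        ("RESPONSE FORMAT", response_format),
    ]
    return "\n\n".join(f"# {name}\n{text}" for name, text in sections)

prompt = costar_prompt(
    context="You are assisting the support team of an e-commerce site.",
    objective="Draft a reply to a customer whose package arrived damaged.",
    style="Concise and professional.",
    tone="Empathetic and apologetic.",
    audience="A frustrated customer with no technical background.",
    response_format="A short email with a greeting, body, and sign-off.",
)

assert prompt.startswith("# CONTEXT")
assert "# RESPONSE FORMAT" in prompt
```

Templating the six concerns separately makes each one easy to vary and A/B test independently, which is the practical appeal of the framework.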

10 Lessons

Activities, Timing, and Details:
Main Bootcamp Sessions: Meets every Sunday from 11 AM to evening.
Lab access: Lab hardware resources are available 24/7 for the duration of the course.
Help sessions: Every day by appointment.
Lab solutions walkthrough: The teaching staff AI engineers will announce their sessions on an ongoing basis.
Quiz: There will be two quizzes on each topic. The teaching AI engineers will hold review sessions to explain the solutions.

This advanced LLM BootCamp is the eagerly awaited hackathon-style, coding-centered boot camp built around real-world, exciting projects. The goal is to make you deeply confident and fluent in applying LLMs to solve a large range of real-world problems in vector embeddings, semantic AI search, advanced RAG (retrieval-augmented generation) techniques, multi-modal learning, video comprehension, adversarial attacks, computer vision, audio processing, natural language processing, and clever techniques of prompt engineering.

Syllabus
- PyTorch: Build models from scratch
- Advanced Prompt Engineering Methods
- Advanced RAG (Retrieval-Augmented Generation) Methods
- In-depth Open-source Tools: LlamaIndex, LangChain, Hugging Face, etc.
- Natural Language to Code (SQL, etc.)
- Adversarial Attacks and Defenses
- Diffusion Models for Images and Videos
- Advanced Methods in Document AI

Activities and components of the boot camp:
- An extensive compilation of guided exercises and labs with Jupyter notebooks
- Lecture sessions with clear explanations of the topics involved
- Real-world group projects that aim to be starters for a potential startup, either outside or within your current company or place of work
- Quizzes and assessments for self-evaluation of progress
- "Cambrian Explosion" series: a collective approach to discussing the latest breakthroughs and happenings in LLMs
- Project demonstrations and cohort feedback
- Research paper readings with lucid explanations bringing out the main ideas and their relevance
- Friendly competition between the teams and sharing of best practices
- A fairly extensive starter code base to guide your progress and development
- A faculty of highly experienced and technical instructors, teaching assistants, and lab assistants providing help around the clock as needed
- For those seeking jobs: mock interview practice, guidance with resumes, and job placement help

You will work either alone or in teams of 4-6 engineers in an environment that closely reproduces the innovation and development that is the quintessence of the Silicon Valley spirit. Each team will get, for exclusive use during the boot camp, a team meeting room with a state-of-the-art multimedia setup and a powerful AI server with banks of RTX 4090s. These will also be accessible remotely, so that the teams can continue to work on them during the workdays.

You will also have unrestricted access to a massive 4-GPU AI server at the admin-privilege level, on a time-shared basis, for the three-month duration of this boot camp. You will also benefit from the 10-gigabit networking and the large server cluster to take your projects to actual production.
SupportVectors will let you use the compute resources for an additional four weeks if you need to finish any remaining aspects of your projects.

A dedicated team of experienced teaching faculty, with deep industry and AI experience on large-scale projects, will guide you throughout the boot camp. The AI experts bring together deep academic backgrounds, educational excellence, and years of technical leadership roles in the industry. SupportVectors Creative Studio will help you create professional videos explaining your projects and will provide creative writing help in deriving detailed technical articles around your projects.

During the boot camp, breakfast and lunch are provided. The break room is stocked with an endless supply of snacks, fruits, beverages, and gourmet coffee.

Registration
Reserve your enrollment now. By the end of the first week of the course, pay the rest of the tuition by Zelle or check.

Financial Aid:
- A 50% discount for registrants from Asia or Africa.
- Installment payment plans are available. Reach out to us by email or phone to discuss and get approval.
- Special discount (25% to 100%) for people with disabilities.
- Special discount for veterans.
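The vector-embedding and semantic-search themes above reduce to one core operation: ranking documents by the similarity of their embedding vectors to a query vector. A dependency-free sketch with tiny hand-made vectors (a real system would obtain these from a learned embedding model):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Pretend embeddings: in practice these come from an embedding model;
# here they are hand-made 3-D stand-ins.
documents = {
    "gpu servers":      [0.9, 0.1, 0.0],
    "french cooking":   [0.0, 0.2, 0.9],
    "cuda programming": [0.8, 0.3, 0.1],
}

def semantic_search(query_vec, docs, top_k=2):
    """Rank documents by cosine similarity to the query embedding."""
    scored = sorted(docs.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]

query = [1.0, 0.2, 0.0]  # embedding of something like "deep learning hardware"
assert semantic_search(query, documents) == ["gpu servers", "cuda programming"]
```

Vector databases exist to perform this same nearest-neighbor ranking efficiently over millions of high-dimensional vectors.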

9 Lessons

Activities, Timing, and Details:
Main Sessions: Meets every Sunday from 11 AM to evening.
Lab access: Lab hardware resources are available 24/7 for the duration of the course.
Help sessions: Every day by appointment.
Lab solutions walkthrough: The teaching staff AI engineers will announce their sessions on an ongoing basis.
Quiz: There will be two quizzes on each topic. The teaching AI engineers will hold review sessions to explain the solutions.

In the dynamic and ever-expanding field of artificial intelligence, mastering Retrieval Augmented Generation (RAG) is crucial for professionals aiming to implement state-of-the-art AI solutions in enterprise settings. This 2-month course on "Advanced Methods of RAG Engineering" is meticulously designed for engineers seeking to deepen their expertise and apply advanced RAG techniques to build high-performance AI applications.

This course emphasizes hands-on learning through extensive lab exercises, practical projects, quizzes, and case studies. Participants will engage with advanced concepts and practical applications of RAG, ensuring they acquire both theoretical knowledge and practical skills. The curriculum covers the latest advancements and techniques in RAG, providing a comprehensive understanding of the field.

Course Topics
The following key topics will be covered:
- What is RAG: An introduction to the fundamentals and significance of Retrieval Augmented Generation. Participants will learn how RAG combines retrieval mechanisms with generative models to enhance the quality and relevance of generated outputs.
- Evolution of RAG Techniques: Exploring the advancements from baseline RAG methods to more complex and efficient techniques, including a historical overview and the technological breakthroughs that have shaped the current landscape of RAG.
- Overview of RAG Solutions: A comprehensive overview of the various tools, libraries, and frameworks that support RAG implementations in the industry, and their respective use cases.
- LlamaIndex/LangChain: Practical applications using LlamaIndex and LangChain for RAG implementation. Participants will engage with these frameworks to understand their architecture and how to utilize them effectively in real-world scenarios.
- Data Ingestion (Unstructured): Techniques for ingesting and processing unstructured data for RAG systems, including methods for cleaning, organizing, and preparing data so that it is suitable for retrieval and generation tasks.
- Re-ranking Strategies: Advanced strategies for re-ranking retrieved documents to improve accuracy. Participants will learn about various re-ranking algorithms and how to implement them to enhance the performance of RAG systems.
- Text Embedding Models: Utilizing text embedding models to enhance retrieval and generation processes, covering different embedding techniques and their applications in creating more effective RAG systems.
- RAG Evaluation (RAGAS): Methods for evaluating RAG systems using the RAGAS framework. Participants will learn how to assess the performance of their RAG models and identify areas for improvement.
- GraphRAG: Implementing RAG using graph-based approaches for improved performance, exploring the integration of graph theory into RAG and new ways to handle complex data structures.
- Hybrid Search and its Necessity: Understanding the role and implementation of hybrid search techniques. Participants will learn how to combine different search methods to create more robust and accurate RAG systems.
- Late-Interaction Neural Retriever (ColBERTv2): Leveraging ColBERTv2 for efficient late-interaction neural retrieval, covering the architecture and use cases of this state-of-the-art retriever model.
- Late-Interaction Neural Retriever (XTR): Advanced techniques using XTR. Participants will explore the capabilities and implementation of XTR in enhancing RAG performance.
- RAG Strategies, Query Transformation: Transforming queries to enhance RAG performance, including techniques for modifying and optimizing queries to improve retrieval accuracy and relevance.
- Sparse Neural Retriever (SPLADE): Utilizing SPLADE for sparse neural retrieval and its applications, covering the architecture and benefits of SPLADE in creating efficient RAG systems.
- Multimodal Embedding Models: Integrating multimodal embedding models for comprehensive data representation. Participants will learn how to handle and integrate different data types (text, images, audio) in RAG systems.
- Multimodal Data Augmentation and Feature Engineering: Techniques for augmenting and engineering features from multimodal data, including methods for creating richer and more informative datasets to improve RAG performance.
- Vector Database Retrieval (Qdrant): Implementing vector database retrieval using Qdrant, covering the architecture and use cases of this powerful vector database system.
- Optional, Chatbot Frontend: Developing chatbot frontends using tools like Streamlit, Chainlit, Dash, and Reflex. Participants will learn how to create user-friendly interfaces for their RAG applications.

By the end of this course, participants will have acquired a deep understanding of advanced RAG techniques and will be equipped to apply these methods to develop and optimize high-performance AI applications in an enterprise context. Join us to enhance your proficiency and stay at the forefront of AI technology.

Registration
Reserve your enrollment now. By the end of the first week of the course, pay the rest of the tuition by Zelle or check.

Financial Aid:
- A 50% discount for registrants from Asia or Africa.
- Installment payment plans are available. Reach out to us by email or phone to discuss and get approval.
- Special discount (25% to 100%) for people with disabilities.
- Special discount for veterans.
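The retrieve-then-re-rank pattern in the RAG topics above can be sketched without any libraries: a cheap first-stage retriever shortlists candidates, and a more careful second-stage scorer re-orders them. Both scorers here are toy stand-ins (keyword overlap for an embedding model, a position-weighted score for a cross-encoder):

```python
def first_stage_score(query, doc):
    """Cheap retriever: fraction of query words that appear in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q)

def rerank_score(query, doc):
    """More careful scorer (stand-in for a cross-encoder): also reward
    query words that appear early in the document."""
    words = doc.lower().split()
    score = 0.0
    for term in set(query.lower().split()):
        if term in words:
            score += 1.0 + 1.0 / (1 + words.index(term))  # earlier = better
    return score

def retrieve_and_rerank(query, corpus, shortlist=3, top_k=1):
    """Stage 1: shortlist cheaply.  Stage 2: re-rank the shortlist carefully."""
    candidates = sorted(corpus, key=lambda d: first_stage_score(query, d),
                        reverse=True)[:shortlist]
    return sorted(candidates, key=lambda d: rerank_score(query, d),
                  reverse=True)[:top_k]

corpus = [
    "qdrant is a vector database for similarity search",
    "reranking improves retrieval accuracy in rag systems",
    "the cafeteria serves lunch at noon",
]
best = retrieve_and_rerank("rag reranking accuracy", corpus)
assert best == ["reranking improves retrieval accuracy in rag systems"]
```

The two-stage split matters because the accurate scorer is usually far too slow to run over the whole corpus; it only ever sees the shortlist.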

18 Lessons

This course introduces engineers to the fundamentals of graph theory, network science, and advanced topics in graph neural networks (GNNs). The course consists of 16 sessions, each 3 hours long, with additional 2-hour follow-up sessions with teaching assistants.

Activities, Timing, and Details:
Main Sessions: Meets every Tuesday and Thursday evening from 7 PM PST.
Lab access: Lab hardware resources are available 24/7 for the duration of the course.
Help sessions: Every day by appointment.
Lab solutions walkthrough: The teaching staff AI engineers will announce their sessions on an ongoing basis.
Quiz: There will be two quizzes on each topic. The teaching AI engineers will hold review sessions to explain the solutions.

Course Syllabus

Introduction to Graphs and Basic Algorithms
- Definition and Types of Graphs: Directed, Undirected, Weighted, Unweighted
- Graph Terminology: Vertices, Edges, Degree, Paths, Cycles
- Graph Traversal Techniques: BFS, DFS
- Shortest Path Algorithms: Dijkstra, Bellman-Ford
- Minimum Spanning Tree: Prim's and Kruskal's Algorithms

Network Science and Complex Network Analysis
- Introduction to Network Science
- Types of Networks: Social, Biological, Technological Networks
- Graph Laplacian and Eigenvalue Decomposition: Spectral Graph Theory, Eigenvalues and Eigenvectors

Measures of Centrality and Community Detection Techniques
- Measures of Centrality: Degree, Betweenness, Closeness, Eigenvector Centrality
- Community Detection Techniques: Modularity and Community Structure, Girvan-Newman Algorithm, Louvain Method

Introduction to Network Representation Learning
- Importance of Network Representation
- Traditional Methods vs. Modern Approaches
- Node Embeddings: DeepWalk, Node2Vec
- Edge and Graph Embeddings

Matrix Factorization and Deep Learning for Network Representation
- Matrix Factorization Methods: Adjacency Matrix Factorization, Laplacian Eigenmaps
- Deep Learning Approaches: Autoencoders for Graphs

Advanced Embedding Techniques
- Variational Graph Autoencoders
- GraphSAGE

Introduction to Graph Neural Networks (GNNs)
- Basic Concepts and History
- Applications of GNNs

Message Passing Neural Networks (MPNN)
- Message Passing Formulation
- Aggregation and Update Functions

Graph Convolutional Networks (GCN)
- Convolutional Operations on Graphs
- Training and Applications of GCNs

Graph Attention Networks (GAT)
- Attention Mechanism in Graphs
- Implementation and Use Cases

Graph Neural Networks with Transformers
- Incorporating Transformers in GNNs
- Benefits and Challenges

Advanced GNN Architectures
- Graph Recurrent Networks (GRN)
- Graph Autoencoders (GAE)

Heterogeneous Graph Neural Networks
- Dealing with Different Types of Nodes and Edges
- Applications in Multi-Relational Data

Applications in Drug Discovery and Healthcare
- GNNs in Drug-Target Interaction
- Predictive Models in Healthcare

Applications in Social Network Analysis and Recommendation Systems
- Community Detection in Social Networks
- Influence and Information Spread
- GNNs in Recommendation Engines
- Enhancing Recommendations with Graph Data

Case Studies and Future Directions
- AlphaFold and Protein Structure Prediction: Understanding AlphaFold, GNNs in Predicting Protein Structures
- Emerging Trends and Research Areas
- Ethical Considerations and Challenges

Course Components
- Lectures: Cover theoretical concepts and practical applications.
- Labs: Hands-on exercises and projects to apply learned concepts.
- Quizzes: Periodic assessments to gauge understanding and retention.
- Projects: Real-world projects to demonstrate mastery of topics.
- Readings: Research papers and articles for in-depth knowledge.

Outcome
By the end of this course, participants will have a comprehensive understanding of graph neural networks, from fundamental concepts to advanced applications. They will be equipped with the skills to implement GNNs in various domains and contribute to cutting-edge research and development in the field.

Registration
Reserve your enrollment now. By the end of the first week of the course, pay the rest of the tuition by Zelle or check.

Financial Aid:
- A 50% discount for registrants from Asia or Africa.
- Installment payment plans are available. Reach out to us by email or phone to discuss and get approval.
- Special discount (25% to 100%) for people with disabilities.
- Special discount for veterans.
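A single message-passing step, the core operation the MPNN sessions above formalize, can be sketched in a few lines: each node aggregates its neighbors' features (here by mean) and mixes the result into its own state. This toy uses scalar features and a fixed 0.5/0.5 mixing weight as an arbitrary stand-in for the learned vectors and weight matrices of a real GNN:

```python
def message_passing_step(adjacency, features):
    """One MPNN step with mean aggregation and a simple update rule.

    adjacency: dict node -> list of neighbor nodes
    features:  dict node -> scalar feature (vectors in a real GNN)
    Update: h'_v = 0.5 * h_v + 0.5 * mean(h_u for u in N(v))
    """
    new_features = {}
    for node, neighbors in adjacency.items():
        if neighbors:
            aggregated = sum(features[u] for u in neighbors) / len(neighbors)
        else:
            aggregated = 0.0  # isolated node: nothing to aggregate
        new_features[node] = 0.5 * features[node] + 0.5 * aggregated
    return new_features

# Triangle graph A-B-C, plus a pendant node D attached to C.
graph = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}
h = {"A": 1.0, "B": 0.0, "C": 0.0, "D": 4.0}

h1 = message_passing_step(graph, h)
# A's signal spreads to its neighbors; D's large value flows into C.
assert h1["B"] == 0.25  # 0.5*0 + 0.5*mean(1.0, 0.0)
```

Stacking k such steps lets information propagate k hops across the graph, which is why GNN depth controls each node's receptive field.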

21 Lessons

In today's rapidly evolving enterprise landscape, mastering the art of LLM (Large Language Model) fine-tuning has become indispensable for engineers seeking to enhance the performance and specificity of AI models. This course is meticulously designed to provide you with the practical skills and theoretical knowledge required to excel in fine-tuning LLMs, ensuring that your models not only perform optimally but also maintain robustness and accuracy across diverse applications. This 2-month intensive course is predominantly hands-on, featuring extensive lab exercises, projects, quizzes, and case studies. It focuses on contemporary fine-tuning techniques that go beyond simply feeding more data into an LLM. We delve into sophisticated methods that prevent catastrophic forgetting and reduce hallucinations while achieving measurable improvements in domain-specific tasks. Course Overview The course is structured to guide you through a comprehensive journey of LLM fine-tuning, starting from foundational concepts to advanced techniques. The key topics covered include: Overview of the Journey: Understand the evolution of fine-tuning practices and the importance of these techniques in modern AI. This module will set the stage by providing a historical context and highlighting the significance of fine-tuning in achieving high-performance AI models. Performance Evaluations: Learn how to assess the performance of fine-tuned models, ensuring they meet desired benchmarks. This includes understanding various evaluation metrics, tools, and best practices for validating model improvements. Dangers of Fine-tuning: Catastrophic Forgetting and Hallucinations: Explore the risks associated with fine-tuning and methods to mitigate these issues. This topic covers the challenges of maintaining model integrity and reliability during and after fine-tuning. 
- Parameter-Efficient Fine-Tuning: Discover techniques that optimize model performance without excessive computational cost. Learn about methods like AdapterFusion, prompt tuning, and other parameter-efficient strategies.
- LoRA Family of Methods: Gain insights into Low-Rank Adaptation methods for fine-tuning LLMs. This section delves into the specifics of LoRA techniques, their advantages, and implementation strategies.
- Fine-tuning Encoder Models: Focus on strategies for fine-tuning encoder-based models for improved feature extraction. Learn how to adapt these models to various tasks and enhance their performance through targeted fine-tuning.
- Fine-tuning LLMs: Dive deep into advanced techniques specific to large language models, including the nuances of tuning models like GPT, BERT, and their derivatives.
- Fine-tuning Vision Models: Learn how to adapt vision models to specific tasks through fine-tuning. This topic covers techniques for enhancing models like CNNs and Vision Transformers for improved image recognition and analysis.
- Fine-tuning Multimodal Models: Explore the challenges and solutions for fine-tuning models that integrate multiple data types. Understand how to handle and optimize models that process both text and image data simultaneously.
- Fine-tuning vs. RAG: Understand the differences and complementarities between fine-tuning and Retrieval-Augmented Generation (RAG). Learn how to combine these approaches effectively for enhanced model performance.
- Cloud-based Fine-Tuning (Modal/Runpod): Utilize cloud platforms for scalable fine-tuning solutions. This section provides insights into leveraging cloud resources to perform large-scale fine-tuning efficiently.
- Data Augmentation and Synthetic Data Creation: Enhance your datasets with synthetic data to improve model training and robustness. Explore various techniques for generating and using synthetic data to augment your training process.
- Fine-tuning Tools, Unsloth, Axolotl, and Others: Get hands-on experience with cutting-edge tools that facilitate efficient fine-tuning. Learn how to use these tools to streamline and optimize your fine-tuning workflows.

By the end of this course, you will have developed a robust understanding of fine-tuning techniques and their practical applications across various domains. The skills you acquire will empower you to refine AI models that deliver precise, reliable, and high-performance results in any enterprise setting. Join us on this journey to become proficient in one of the most critical aspects of modern AI development. We look forward to welcoming you to this challenging and rewarding course, where you will engage with complex concepts, collaborate on innovative projects, and emerge as a skilled professional in the field of LLM fine-tuning.

Registration
Reserve your enrollment now. Pay the remaining tuition by Zelle or check by the end of the first week of the course.

Financial Aid:
- A 50% discount for registrants from Asia or Africa.
- Installment payment plans are available. Reach out to us by email or phone to discuss and get approval.
- Special discount (25% to 100%) for people with disabilities.
- Special discount for veterans.

19 Lessons

Welcome to the Weekly Seminar at SupportVectors AI Lab, an invigorating space for professionals passionate about Artificial Intelligence! Our seminar, held both in person at our Fremont, CA venue and online via Zoom, offers a unique opportunity for learning and discussion in the AI field. Here's what you can expect:

- Weekly Deep Dives into AI Research: Every Wednesday at 6 PM PST, we delve into the world of AI research, focusing on one or more papers that shed light on a specific topic. This isn't just a reading session; it's a journey through the intricacies of AI research.
- Expert Guidance: Each session is led by a knowledgeable discussion leader who meticulously explains the technical aspects of the paper. Complex mathematical methods and results are broken down so they're easily graspable for most engineering professionals.
- Engaging Discussions: After the walkthrough, we open the floor for a vibrant discussion. Participants are encouraged to share their insights, ask questions, and explore the broader implications of the topic. This is a chance to voice your perspective and learn from others.
- A Community Endeavor: Our seminar is more than just a learning session; it's about building a community. We aim to foster a nurturing environment where professionals can collaboratively delve into the latest breakthroughs and developments in AI.
- Guest Speakers: Occasionally, we invite esteemed speakers who have made significant contributions to specific AI topics. Their insights add a valuable dimension to our understanding.
- Comfort and Networking: To make the experience more enjoyable, we serve food and refreshments. It's a great opportunity to network and discuss AI in a relaxed setting.
- Learning Beyond the Seminar: Missed a session or want to revisit the discussion? Our Learning Portal has you covered with session recordings and summaries. It's a great resource to keep your knowledge fresh and updated.
- Test Your Understanding: After each session, we create a quiz on the topic. It's an excellent tool to gauge your understanding and reinforce your learning.

Join us at SupportVectors AI Lab's Weekly Seminar to deepen your AI knowledge, engage with fellow professionals, and be a part of a thriving learning community. We can't wait to see you there!

27 Lessons

This four-session series provides an intuitive introduction to machine learning and AI. It is intended for those curious about AI who want to learn about its core pillars intuitively.

9 Lessons

