/dev/reading
Category: AI and Machine Learning

107 books, 4 subcategories
The Shape of Data: Geometry-Based Machine Learning and Data Analysis in R
by Colleen M. Farrelly and Yaé Ulrich Gaba

Whether you’re a mathematician, seasoned data scientist, or marketing professional, you’ll find The Shape of Data to be the perfect introduction to the critical interplay between the geometry of data structures and machine learning.

This book’s extensive collection of case studies (drawn from medicine, education, sociology, linguistics, and more) and gentle explanations of the math behind dozens of algorithms provide a comprehensive yet accessible look at how geometry shapes the algorithms that drive data analysis.

In addition to gaining a deeper understanding of how to implement geometry-based algorithms with code, you’ll explore:

  • Supervised and unsupervised learning algorithms and their application to network data analysis
  • The way distance metrics and dimensionality reduction impact machine learning
  • How to visualize, embed, and analyze survey and text data with topology-based algorithms
  • New approaches to computational solutions, including distributed computing and quantum algorithms

Transfer Learning for Natural Language Processing
by Paul Azunre

Build custom NLP models in record time by adapting pre-trained machine learning models to solve specialized problems.

In Transfer Learning for Natural Language Processing you will learn:

  • Fine-tuning pretrained models with new domain data
  • Picking the right model to reduce resource usage
  • Transfer learning for neural network architectures
  • Generating text with generative pretrained transformers
  • Cross-lingual transfer learning with BERT
  • Foundations for exploring NLP academic literature

Training deep learning NLP models from scratch is costly, time-consuming, and requires massive amounts of data. In Transfer Learning for Natural Language Processing, DARPA researcher Paul Azunre reveals cutting-edge transfer learning techniques that apply customizable pretrained models to your own NLP architectures. You’ll learn how to use transfer learning to deliver state-of-the-art results for language comprehension, even when working with limited labeled data. Best of all, you’ll save on training time and computational costs.
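The core transfer-learning idea the book applies, reuse a frozen pretrained feature extractor and train only a small task head, can be shown with a toy sketch. (The embedding table, labels, and data below are hypothetical illustrations; the book itself works with real pretrained models such as BERT.)

```python
# Frozen "pretrained" word embeddings (hypothetical 2-d features).
EMBED = {
    "great": (1.0, 0.2), "love": (0.9, 0.1),
    "awful": (-1.0, 0.3), "hate": (-0.8, 0.2),
}

def featurize(text):
    # Mean-pool the frozen embeddings; unknown words are skipped.
    vecs = [EMBED[w] for w in text.split() if w in EMBED]
    n = max(len(vecs), 1)
    return tuple(sum(v[i] for v in vecs) / n for i in range(2))

def train_head(data, epochs=10, lr=0.5):
    # Perceptron-style updates touch only the head's weights;
    # the embedding table above is never modified.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for text, label in data:  # label: +1 positive, -1 negative
            x = featurize(text)
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1
            if pred != label:
                w = [w[i] + lr * label * x[i] for i in range(2)]
                b += lr * label
    return w, b

# Only two labeled examples are needed, because the features
# already encode sentiment "knowledge" from pretraining.
data = [("great love", 1), ("awful hate", -1)]
w, b = train_head(data)
score = lambda t: w[0] * featurize(t)[0] + w[1] * featurize(t)[1] + b
print(score("love") > 0)   # True
print(score("awful") > 0)  # False
```

The point of the sketch is the split of responsibilities: pretraining supplies the features, and the task-specific training loop stays tiny and cheap.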

Transformers for Natural Language Processing and Computer Vision
Explore Generative AI and Large Language Models with Hugging Face, ChatGPT, GPT-4V, and DALL-E 3
by Denis Rothman

Transformers for Natural Language Processing and Computer Vision, Third Edition, explores Large Language Model (LLM) architectures, applications, and various platforms (Hugging Face, OpenAI, and Google Vertex AI) used for Natural Language Processing (NLP) and Computer Vision (CV).

The book guides you through different transformer architectures to the latest Foundation Models and Generative AI. You’ll pretrain and fine-tune LLMs and work through different use cases, from summarization to implementing question-answering systems with embedding-based search techniques. You will also learn the risks of LLMs, from hallucinations and memorization to privacy, and how to mitigate such risks using moderation models with rule and knowledge bases. You’ll implement Retrieval Augmented Generation (RAG) with LLMs to improve the accuracy of your models and gain greater control over LLM outputs.
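The retrieval step behind embedding-based search and RAG can be sketched minimally. (A toy setup: bag-of-words counts stand in for real embeddings, the document texts are invented, and the "generation" step is reduced to prompt assembly.)

```python
import math
from collections import Counter

DOCS = [
    "transformers use self-attention over token sequences",
    "retrieval augmented generation grounds answers in documents",
    "vision transformers split images into patches",
]

def embed(text):
    # Stand-in for a real embedding model: word-count vectors.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # Rank documents by similarity to the query embedding.
    ranked = sorted(DOCS, key=lambda d: cosine(embed(query), embed(d)),
                    reverse=True)
    return ranked[:k]

def build_prompt(query):
    # Ground the LLM's answer in the retrieved context.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("how does retrieval augmented generation work?"))
```

Swapping the word-count `embed` for a real embedding model and sending `build_prompt`'s output to an LLM yields the basic RAG loop the book develops.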

Dive into generative vision transformers and multimodal model architectures and build applications such as image-to-text and video-to-text classifiers. Go further by combining different models and platforms and learning about AI agent replication.

This book provides you with an understanding of transformer architectures, pretraining, fine-tuning, LLM use cases, and best practices.

What you will learn

  • Learn how to pretrain and fine-tune LLMs
  • Learn how to work with multiple platforms, such as Hugging Face, OpenAI, and Google Vertex AI
  • Learn about different tokenizers and the best practices for preprocessing language data
  • Implement Retrieval Augmented Generation and rules bases to mitigate hallucinations
  • Visualize transformer model activity for deeper insights using BertViz, LIME, and SHAP
  • Create and implement cross-platform chained models, such as HuggingGPT
  • Go in-depth into vision transformers with CLIP, DALL-E 2, DALL-E 3, and GPT-4V

Who this book is for

This book is ideal for NLP and CV engineers, software developers, data scientists, machine learning engineers, and technical leaders looking to advance their LLMs and generative AI skills or explore the latest trends in the field. Knowledge of Python and machine learning concepts is required to fully understand the use cases and code examples. However, with examples using LLM user interfaces, prompt engineering, and no-code model building, this book is great for anyone curious about the AI revolution.

Understanding Large Language Models
Learning Their Underlying Concepts and Technologies
by Thimira Amaratunga

This book will teach you the underlying concepts of large language models (LLMs), as well as the technologies associated with them.

The book starts with an introduction to the rise of conversational AIs such as ChatGPT, and how they are related to the broader spectrum of large language models. From there, you will learn about natural language processing (NLP), its core concepts, and how it has led to the rise of LLMs. Next, you will gain insight into transformers and how their characteristics, such as self-attention, enhance the capabilities of language modeling, along with the unique capabilities of LLMs. The book concludes with an exploration of the architectures of various LLMs and the opportunities presented by their ever-increasing capabilities—as well as the dangers of their misuse.

After completing this book, you will have a thorough understanding of LLMs and will be ready to take your first steps in implementing them into your own projects.
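The self-attention computation described above can be sketched in a few lines of plain Python: each position attends to every position, weighted by a softmax over scaled dot products. (Toy 2-d vectors; real transformers add learned query/key/value projections and multiple heads.)

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    d = len(keys[0])
    out = []
    for q in queries:
        # Scaled dot-product scores against every key.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Each output is a weighted mix of all value vectors.
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy token representations
ctx = attention(x, x, x)  # self-attention: Q = K = V = x
print(ctx[0])  # first token's context-mixed representation
```

Because queries, keys, and values all come from the same sequence, every token's output blends information from the whole sequence, which is the property that lets transformers model long-range context.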

What You Will Learn

  • Grasp the underlying concepts of LLMs
  • Gain insight into how the concepts and approaches of NLP have evolved over the years
  • Understand transformer models and attention mechanisms
  • Explore different types of LLMs and their applications
  • Understand the architectures of popular LLMs
  • Delve into misconceptions and concerns about LLMs, as well as how to best utilize them

Who This Book Is For

Anyone interested in learning the foundational concepts of NLP, LLMs, and recent advancements of deep learning

Unlocking the Secrets of Prompt Engineering
by Gilbert Mizrahi

Unlocking the Secrets of Prompt Engineering is your key to mastering the art of AI-driven writing. This book propels you into the world of large language models (LLMs), empowering you to create and apply prompts effectively for diverse applications, from revolutionizing content creation and chatbots to coding assistance.

Starting with the fundamentals of prompt engineering, this guide provides a solid foundation in LLM prompts, their components, and applications. Through practical examples and use cases, you'll discover how LLMs can be used for generating product descriptions, personalized emails, social media posts, and even creative writing projects like fiction and poetry. The book covers advanced use cases such as creating and promoting podcasts, integrating LLMs with other tools, and using AI for chatbot development. But that’s not all. You'll also delve into the ethical considerations, best practices, and limitations of using LLM prompts as you experiment and optimize your approach for best results.

By the end of this book, you'll have unlocked the full potential of AI in writing and content creation to generate ideas, overcome writer's block, boost productivity, and improve communication skills.

What you will learn

  • Explore the different types of prompts and their strengths and weaknesses
  • Understand the AI agent's knowledge and mental model
  • Enhance your creative writing with AI insights for fiction and poetry
  • Develop advanced skills in AI chatbot creation and deployment
  • Discover how AI will transform industries such as education, legal, and others
  • Integrate LLMs with various tools to boost productivity
  • Understand AI ethics and best practices, and navigate limitations effectively
  • Experiment and optimize AI techniques for best results
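A few-shot prompt of the kind this book discusses can be assembled programmatically from reusable parts, a role, worked examples, and the new item. (The template wording and examples below are hypothetical illustrations, not taken from the book.)

```python
ROLE = "You are a concise product copywriter."
EXAMPLES = [
    ("wireless mouse",
     "Glide through work with a silent, lag-free wireless mouse."),
    ("steel water bottle",
     "Stay cold for 24 hours with double-walled steel."),
]

def few_shot_prompt(role, examples, item):
    # Render each example as a Product/Description pair, then leave
    # the final Description blank for the model to complete.
    shots = "\n".join(f"Product: {p}\nDescription: {d}"
                      for p, d in examples)
    return f"{role}\n\n{shots}\n\nProduct: {item}\nDescription:"

prompt = few_shot_prompt(ROLE, EXAMPLES, "ergonomic keyboard")
print(prompt)
```

Keeping role, examples, and query as separate components makes it easy to experiment, swap in different examples, adjust the role, and compare outputs, which is the optimization loop the book encourages.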

Who this book is for

This book is for a wide audience, including writers, marketing and business professionals, researchers, students, tech enthusiasts, and creative individuals. Anyone looking for strategies and examples for using AI co-writing tools like ChatGPT effectively in domains such as content creation, drafting emails, and inspiring artistic works will find this book especially useful. If you are interested in AI, NLP, and innovative software for personal or professional use, this is the book for you.

Welcome to AI
A Human Guide to Artificial Intelligence
by David L. Shrier

Artificial intelligence is driving workforce disruption on a scale not seen since the Industrial Revolution.

In schools and universities, AI technology has forced a reevaluation of the way students are taught and assessed. Meanwhile, ChatGPT has become a cultural phenomenon, reaching a hundred million users and attracting a reputed $1 trillion in investor interest in its parent company, OpenAI.

The race to dominate the generative AI market is accelerating at breakneck speed, inspiring breathless headlines and immense public interest.

Welcome to AI provides a rare view into a frontier area of computer science that will change everything about how you live and work. Read this book and better understand how to succeed in the AI-enabled future.