Introducing Zeta, the All-New AI Framework to Effortlessly Build the Best LLMs

date: Aug 30, 2023
slug: zeta-create-best-llms-ever
status: Published
tags: Research
summary: Announcing Zeta, the framework that enables you to easily create ultra-powerful Multi-Modality Foundation Models
type: Post
Developers strive to create deep learning models that understand the nuances of the real world — a space that’s multifaceted, dynamic, and inherently multi-modal. They’ve always envisioned models that could digest a plethora of information, be it textual, visual, or auditory, and make sense of it just as humans do.
In an ideal future, achieving such multi-modality shouldn't require countless lines of code, endless troubleshooting, or sacrifices in efficiency.
But alas, this future is hindered because there’s been no framework versatile enough, powerful enough, and intuitive enough to transform this vision into reality.
This gap isn’t for lack of trying. Previous attempts have been bogged down by monolithic architectures, a lack of modularity, and inflexible design principles.
But imagine the horizon where this gap is bridged — it’s a realm of endless possibilities and performance.
The spoils of this new frontier?
Enhanced user experiences, richer application capabilities, and a genuine semblance of understanding by our machine counterparts.
Our secret weapon to bridge this divide is Zeta: a groundbreaking ML framework designed from the ground up with the developer in mind, built on modularity, reliability, usability, and unparalleled speed.
Why trust us? Our commitment to these tenets isn’t just lip service; it’s backed by state-of-the-art implementations, rigorous testing, and a design philosophy that puts developers first.

Overview of Zeta

Zeta is not just another ML framework; it's the evolution of deep learning tools. While it lets you construct state-of-the-art models effortlessly, its true prowess lies in its design tenets:
  1. Usability: Every interaction with Zeta is crafted to be fluid. Just as a swim in the ocean feels boundless and refreshing, Zeta offers Pythonic methods and classes with intuitive error handling that guides you on what to do next.
  2. Reliability: Wasting computational resources isn't in Zeta's dictionary. By ensuring every FLOP is utilized optimally, Zeta delivers ultra-reliable, high-performance designs for all its functions and classes.
  3. Speed: Calling Zeta fast is an understatement. It's built for those who crave speed in their ML workflows, making it the Lamborghini of ML frameworks.

Deep Dive into Zeta’s Capabilities

Quick Start with Zeta
Before diving deep, let’s get you started:
pip3 install zeta
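
To confirm the install worked, a quick sanity check from Python is enough (a minimal snippet; it only verifies that the package imports and shows where it was installed):
import zeta
print(zeta.__file__)  # filesystem path of the installed package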

Unleashing the Power of Attention Mechanisms

Attention mechanisms have transformed the landscape of deep learning. With Zeta, you are no longer bound by traditional implementations. Experience the might of FlashAttention and MultiQueryAttention:
import torch
from zeta import FlashAttention

# Tensors are shaped (batch, heads, sequence length, head dimension).
q = torch.randn(2, 4, 6, 8)   # queries
k = torch.randn(2, 4, 10, 8)  # keys
v = torch.randn(2, 4, 10, 8)  # values

attention = FlashAttention(causal=False, dropout=0.1, flash=False)
output = attention(q, k, v)
print(output.shape)  # torch.Size([2, 4, 6, 8]): query length and head dim are preserved
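
The same class handles masked self-attention for autoregressive decoding: pass causal=True. A minimal sketch reusing the constructor arguments above (causal masking assumes queries and keys cover the same positions, so the sequence lengths match here):
import torch
from zeta import FlashAttention

# With causal masking, queries and keys cover the same positions.
q = torch.randn(2, 4, 10, 8)
k = torch.randn(2, 4, 10, 8)
v = torch.randn(2, 4, 10, 8)

attention = FlashAttention(causal=True, dropout=0.1, flash=False)
output = attention(q, k, v)
print(output.shape)  # torch.Size([2, 4, 10, 8])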

The Marvel of GPT-4

While attention mechanisms are revolutionary, combining them with neural architectures like GPT-4 unleashes possibilities like never before. Zeta doesn’t just stop at attention; it lets you tap into the prowess of models like GPT-4.
import torch
from zeta import GPT4

text = torch.randint(0, 256, (1, 1024)).cuda()  # (batch, sequence length) token ids

# Move the model to the same device as the input before calling it.
model = GPT4().cuda()
model(text)
import torch
from zeta import GPT4MultiModal

text = torch.randint(0, 256, (1, 1024)).cuda()  # (batch, sequence length) token ids
img = torch.randn(1, 3, 256, 256).cuda()        # (batch, channels, height, width)

# Multi-modal GPT4
model = GPT4MultiModal().cuda()
model(text, img)
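
From there, plain PyTorch is all you need for inference. Here is a hypothetical greedy-decoding loop; it assumes the forward pass returns next-token logits of shape (batch, seq_len, vocab_size), which the snippets above don't confirm, so treat it as a sketch:
import torch
from zeta import GPT4

model = GPT4().cuda()
tokens = torch.randint(0, 256, (1, 8)).cuda()  # short prompt

# Hypothetical greedy decoding; assumes model(tokens) returns logits
# of shape (batch, seq_len, vocab_size).
for _ in range(16):
    logits = model(tokens)
    next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
    tokens = torch.cat([tokens, next_token], dim=-1)

print(tokens.shape)  # torch.Size([1, 24])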

Transformer Documentation

The Transformer class is Zeta's crown jewel, a testament to Zeta's commitment to flexibility, power, and simplicity. It implements the architecture introduced in "Attention Is All You Need" (Vaswani et al., 2017), combining attention mechanisms with feedforward networks for natural language processing tasks such as language modeling, machine translation, and text generation.
The class provides a flexible, configurable interface for building transformer models for sequence-to-sequence tasks: you specify the number of tokens, the maximum sequence length, the attention layers, the embedding provider, and the other parameters needed to create and train a model.
It supports both autoregressive and non-autoregressive training settings and includes features such as relative positional biases, rotary positional embeddings, memory tokens, and more.
import torch
from zeta import Transformer

# Example 1: Basic Usage
# attn_layers_instance and embedding_provider_instance are placeholders for an
# attention-layer stack and an embedding provider constructed elsewhere.
transformer = Transformer(
    num_tokens=10000,
    max_seq_len=256,
    attn_layers=attn_layers_instance,
    embedding_provider=embedding_provider_instance
)

input_tokens = torch.randint(0, 10000, (1, 256))  # (batch, sequence length)
logits = transformer(input_tokens)

# Example 2: Return Embeddings
embeddings = transformer(input_tokens, return_embeddings=True)

# Example 3: Return Intermediate Attention Maps
logits, attn_maps = transformer(input_tokens, return_attn=True)
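
Because the forward pass returns raw logits, standard PyTorch training code applies directly. A minimal sketch of a next-token cross-entropy step, assuming logits have shape (batch, seq_len, num_tokens) as in Example 1:
import torch.nn.functional as F

# Shift by one position so each token predicts its successor.
loss = F.cross_entropy(
    logits[:, :-1, :].reshape(-1, logits.size(-1)),
    input_tokens[:, 1:].reshape(-1),
)
loss.backward()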
For an in-depth understanding of Zeta’s Transformer class, refer to the Transformer Documentation section.

Roadmap

Our journey with Zeta is a commitment to continuous improvement and evolution:
  1. Enhanced Multi-Modality: Integration of more diverse data types, making it easier to train models that understand textual, visual, auditory, and sensor-based data.
  2. Optimized Pre-trained Models: Providing a suite of pre-trained models across various domains, ensuring you have a starting point regardless of your application.
  3. Developer Community: Building a robust community where developers can share custom modules, tips, and best practices.
  4. Expanded Toolkit: From visualization tools to debugging suites, we aim to make Zeta the comprehensive toolbelt every ML developer deserves.

Conclusion

In a world that’s moving at breakneck speed, Zeta is your jetpack.
It’s more than just a framework; it’s a philosophy, a commitment, and a vision.
A vision where creating multi-modality models isn’t a chore, but a joy.
Join us on this exciting journey, be part of the Zeta revolution, and let’s shape the future of machine learning together.
Join Agora, the open source research lab behind Zeta!
