Here you can read past volumes of HelloGitHub Monthly by category. You are currently viewing the HelloGitHub AI collection.

Star 21k
Vol.108
2 hours ago

Microsoft's Open-Source AI Agent Tutorial for Beginners. This project is a tutorial on AI agents crafted by Microsoft specifically for beginners, split into 10 lessons with detailed explanations, videos, and sample code.

ai-agents-for-beginners
Star 8.3k
Vol.102
2 hours ago

Toolkit for Optimizing and Deploying Deep Learning Models. This project is an open-source toolkit from Intel for accelerating and optimizing the deployment of deep learning models. It helps developers deploy trained models to a variety of hardware platforms and supports deep learning frameworks such as TensorFlow, PyTorch, and ONNX.

import openvino as ov
import torch
import torchvision

# load PyTorch model into memory
model = torch.hub.load("pytorch/vision", "shufflenet_v2_x1_0", weights="DEFAULT")

# convert the model into OpenVINO model
example = torch.randn(1, 3, 224, 224)
ov_model = ov.convert_model(model, example_input=(example,))

# compile the model for CPU device
core = ov.Core()
compiled_model = core.compile_model(ov_model, 'CPU')

# infer the model on random data
output = compiled_model({0: example.numpy()})
openvino
Star 54k
Vol.44
2 hours ago

comma.ai's open-source autonomous driving system.

openpilot
Star 144k
Vol.34
2 hours ago

PyTorch pre-trained weights for Google's landmark language representation model (BERT), packaged together with the PyTorch framework so it is much easier to get started. The PyTorch version is especially convenient for beginners to experiment with. Example code:

import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM

# Load pre-trained model tokenizer (vocabulary)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Tokenized input
text = "Who was Jim Henson ? Jim Henson was a puppeteer"
tokenized_text = tokenizer.tokenize(text)

# Mask a token that we will try to predict back with `BertForMaskedLM`
masked_index = 6
tokenized_text[masked_index] = '[MASK]'
assert tokenized_text == ['who', 'was', 'jim', 'henson', '?', 'jim', '[MASK]', 'was', 'a', 'puppet', '##eer']

# Convert token to vocabulary indices
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
# Define sentence A and B indices associated to 1st and 2nd sentences (see paper)
segments_ids = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]

# Convert inputs to PyTorch tensors
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
Star 29k
Vol.60
3 hours ago

A tiny open-source deep learning framework. With fewer than 1k lines of code it stays simple, yet it supports both inference and training of deep models. Example code:

from tinygrad.tensor import Tensor
import tinygrad.optim as optim

class TinyBobNet:
    def __init__(self):
        self.l1 = Tensor.uniform(784, 128)
        self.l2 = Tensor.uniform(128, 10)

    def forward(self, x):
        return x.dot(self.l1).relu().dot(self.l2).logsoftmax()

model = TinyBobNet()
optim = optim.SGD([model.l1, model.l2], lr=0.001)

# ... and complete like pytorch, with (x,y) data
out = model.forward(x)
loss = out.mul(y).mean()
optim.zero_grad()
loss.backward()
optim.step()
tinygrad
6
djl
Star 4.5k
Vol.51
3 hours ago

An open-source deep learning framework from Amazon built for the Java language. It lets Java developers build and use native machine learning and deep learning models directly in Java, while lowering the barrier to deep learning development. Through DJL's intuitive, high-level APIs, Java developers can train their own models or run inference with models pre-trained in Python by data scientists. If you happen to be a Java developer interested in learning deep learning, this project is exactly what you need. Here is what it looks like in action:

djl
7
ollama
Star 141k
Vol.97
4 hours ago

Tool for Running Various Large Language Models Locally. This is a tool written in Go for installing, launching, and managing large language models on a local machine with a single command. It supports models such as Llama 3, Gemma, and Mistral, and runs on Windows, macOS, and Linux.
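Ollama also exposes a local HTTP API once the server is running, so other programs can talk to the models it manages. Below is a minimal Python sketch, assuming the server is listening on its default port 11434 and that a model such as llama3 has already been pulled (e.g. with `ollama pull llama3`); the model name and prompt are just placeholders.

import requests

# Assumes a local ollama server on the default port (11434)
# and that the "llama3" model has already been pulled.
payload = {
    "model": "llama3",                 # any locally available model tag
    "prompt": "Why is the sky blue?",  # placeholder prompt
    "stream": False,                   # return the full answer as one JSON object
}

resp = requests.post("http://localhost:11434/api/generate", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["response"])         # the generated text

Calling the HTTP endpoint keeps the sketch dependency-light; the official Python client library offers the same functionality behind a friendlier interface.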

ollama
8
stanza
Star 7.5k
Vol.35
4 hours ago

The official Stanford NLP Python library for many human languages. It contains the latest fully neural pipeline used for the CoNLL 2018 shared task, as well as a package for accessing the Java Stanford CoreNLP server. Example code:

import stanfordnlp

stanfordnlp.download('en')    # This downloads the English models for the neural pipeline
nlp = stanfordnlp.Pipeline()  # This sets up a default neural pipeline in English
doc = nlp("Barack Obama was born in Hawaii. He was elected president in 2008.")
doc.sentences[0].print_dependencies()
9
k8sgpt
Star 6.6k
Vol.101
4 hours ago

Kubernetes Troubleshooting AI Assistant. This project uses Large Language Models (LLMs) to automatically analyze issues in Kubernetes clusters and offer troubleshooting and optimization suggestions. It reads the cluster's status data and configuration to generate reliable diagnostic reports.
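As a rough illustration of the workflow, the sketch below drives the k8sgpt CLI from Python; it assumes the CLI is installed, an AI backend has already been configured, and the current kubeconfig points at the target cluster (flag names may differ between versions).

import subprocess

# Assumes the k8sgpt CLI is installed, an AI backend is configured,
# and kubeconfig access to the target cluster is available.
result = subprocess.run(
    ["k8sgpt", "analyze", "--explain"],  # scan the cluster and ask the LLM to explain the findings
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)                     # diagnostic report with suggested fixes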

k8sgpt
Star 42k
Vol.101
5 hours ago

Data Framework for Large Language Models. This project is a data framework designed specifically for Large Language Model (LLM) applications, helping developers easily integrate private data with LLMs. It offers data connectors that support the creation of indexes from various data sources such as APIs, PDFs, documents, and SQL databases, simplifying the data import and query operations to the point where beginners can enhance the context of LLMs with just a few lines of code.

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("Some question about the data should go here")
print(response)