
Here you can read past volumes of HelloGitHub Monthly by category. You are currently viewing the HelloGitHub AI collection.

ultralytics
Star 40k
Vol.103

Advanced Object Detection and Tracking Model. Building on previous YOLO versions, this project adds new features and model improvements, and it performs strongly on tasks such as object detection, tracking, instance segmentation, and image classification.

from ultralytics import YOLO

# Load a model
model = YOLO("yolo11n.pt")

# Train the model
train_results = model.train(
    data="coco8.yaml",  # path to dataset YAML
    epochs=100,         # number of training epochs
    imgsz=640,          # training image size
    device="cpu",       # device to run on, i.e. device=0 or device=0,1,2,3 or device=cpu
)

# Evaluate model performance on the validation set
metrics = model.val()

# Perform object detection on an image
results = model("path/to/image.jpg")
results[0].show()

# Export the model to ONNX format
path = model.export(format="onnx")  # returns path to exported model
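The description also mentions tracking, which the snippet above does not show. A minimal hedged sketch of the tracking API, pointed at a placeholder video path:

from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# Track objects across video frames; show=True displays the annotated stream
results = model.track("path/to/video.mp4", show=True)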
jax
Star 32k
Vol.90

Google's High-Performance Scientific Computing Library. This Python library for numerical computation combines just-in-time (JIT) compilation, automatic differentiation (Autograd), and the XLA linear algebra compiler behind a NumPy-like API. Compared with NumPy, JAX is faster and more memory-efficient, and it adds automatic differentiation, automatic vectorization, parallel computing, and more.

from jax import grad
import jax.numpy as jnp

def tanh(x):  # Define a function
    y = jnp.exp(-2.0 * x)
    return (1.0 - y) / (1.0 + y)

grad_tanh = grad(tanh)  # Obtain its gradient function
print(grad_tanh(1.0))   # Evaluate it at x = 1.0
# prints 0.4199743

# Automatic differentiation composes, so higher-order derivatives are nested calls
print(grad(grad(grad(tanh)))(1.0))
# prints 0.62162673
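The description also credits JAX with automatic vectorization and JIT compilation, which the snippet does not show. A minimal sketch of jit and vmap; the toy model and shapes are illustrative:

from jax import jit, vmap
import jax.numpy as jnp

def predict(w, x):
    # Toy single-layer model
    return jnp.tanh(jnp.dot(x, w))

w = jnp.ones((3,))     # toy weights
xs = jnp.ones((8, 3))  # a batch of 8 inputs

# vmap maps predict over the batch axis of xs without a Python loop
batched_predict = vmap(predict, in_axes=(None, 0))

# jit compiles the batched function with XLA
fast_predict = jit(batched_predict)
print(fast_predict(w, xs).shape)  # (8,)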
openvino
Star 8.2k
Vol.102

Toolkit for Optimizing and Deploying Deep Learning Models. This Intel open-source toolkit accelerates and optimizes the deployment of deep learning models, helping developers deploy trained models to a variety of hardware platforms. It supports deep learning frameworks such as TensorFlow, PyTorch, and ONNX.

import openvino as ov
import torch
import torchvision

# Load a PyTorch model into memory
model = torch.hub.load("pytorch/vision", "shufflenet_v2_x1_0", weights="DEFAULT")

# Convert the model into an OpenVINO model
example = torch.randn(1, 3, 224, 224)
ov_model = ov.convert_model(model, example_input=(example,))

# Compile the model for the CPU device
core = ov.Core()
compiled_model = core.compile_model(ov_model, 'CPU')

# Infer the model on random data
output = compiled_model({0: example.numpy()})
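Since the description highlights ONNX support, here is a minimal hedged sketch of loading an exported ONNX file directly; the file path is hypothetical:

import openvino as ov

core = ov.Core()
# read_model accepts ONNX files as well as OpenVINO IR
model = core.read_model("model.onnx")  # hypothetical path to an exported model
compiled_model = core.compile_model(model, "CPU")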
wandb
Star 9.8k
Vol.81

A Lightweight Machine Learning Visualization Tool. This project is a tool for visualizing and tracking machine learning experiments; with just a few lines of code you can track, compare, and visualize your runs.

import wandb

# 1. Start a W&B run
wandb.init(project="gpt3")

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

# Model training code here ...

# 3. Log metrics over time to visualize performance
for i in range(10):
    loss = 2.0 ** -i  # placeholder value; log your real training loss here
    wandb.log({"loss": loss})
litellm
Star 21k
Vol.98

Tool for Simplified API Calls to Large Models. This project unifies the interfaces of various AI large models and services behind the OpenAI format, simplifying management of and switching between different AI services or models. It also supports setting budgets, rate-limiting requests, managing API keys, and configuring an OpenAI proxy server.

from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["COHERE_API_KEY"] = "your-cohere-key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
print(response)
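The rate-limit and switching features mentioned above are not shown in the snippet. A hedged sketch using LiteLLM's Router, which load-balances requests across registered deployments; the alias name and rpm values are illustrative:

from litellm import Router

# Two deployments behind one alias; rpm caps requests per minute per deployment
router = Router(model_list=[
    {"model_name": "my-chat", "litellm_params": {"model": "gpt-3.5-turbo", "rpm": 60}},
    {"model_name": "my-chat", "litellm_params": {"model": "command-nightly", "rpm": 60}},
])

response = router.completion(
    model="my-chat",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)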
vllm
Star 46k
Vol.105

Efficient LLM Inference and Serving Engine. This highly efficient, user-friendly large language model inference engine is designed to address slow inference and low resource utilization. Built on PyTorch and CUDA, it combines a memory optimization algorithm (PagedAttention), computational graph optimization, and model parallelism to significantly reduce GPU memory usage and make full use of multi-GPU resources. vLLM is also seamlessly compatible with Hugging Face (HF) models and runs efficiently on a variety of hardware platforms such as GPUs, CPUs, and TPUs, making it well suited to real-time question answering, text generation, and recommendation systems.

from vllm import LLM

# Sample prompts.
prompts = ["Hello, my name is", "The capital of France is"]

# Create an LLM.
llm = LLM(model="lmsys/vicuna-7b-v1.3")

# Generate texts from the prompts.
outputs = llm.generate(prompts)
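Generation can also be steered with explicit sampling settings; a short sketch using SamplingParams, with illustrative parameter values:

from vllm import LLM, SamplingParams

# Explicit sampling parameters instead of the defaults
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="lmsys/vicuna-7b-v1.3")
outputs = llm.generate(["Hello, my name is"], sampling_params)
for output in outputs:
    print(output.outputs[0].text)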
tinygrad
Star 29k
Vol.60

A Small Open-Source Deep Learning Framework. With fewer than 1k lines of code it stays remarkably simple, yet it supports both inference and training of deep models. Example code:

from tinygrad.tensor import Tensor
import tinygrad.optim as optim

class TinyBobNet:
    def __init__(self):
        self.l1 = Tensor.uniform(784, 128)
        self.l2 = Tensor.uniform(128, 10)

    def forward(self, x):
        return x.dot(self.l1).relu().dot(self.l2).logsoftmax()

model = TinyBobNet()
optim = optim.SGD([model.l1, model.l2], lr=0.001)

# ... and complete like pytorch, with (x, y) data
out = model.forward(x)
loss = out.mul(y).mean()
optim.zero_grad()
loss.backward()
optim.step()
openpilot
Star 53k
Vol.44

An open-source autonomous driving system from comma.ai.

supervision
Star 26k
Vol.93

Computer Vision AI Toolkit. This project simplifies the development of computer vision tasks such as object detection, classification, annotation, and tracking. Developers only need to load a dataset and a model to run detection on images and videos, count detections within designated zones (sketched after the snippet below), and more.

import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread(...)  # load your image
model = YOLO('yolov8s.pt')
result = model(image)[0]
detections = sv.Detections.from_ultralytics(result)

print(len(detections))
# 5
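A minimal hedged sketch of the zone counting mentioned above, using sv.PolygonZone and reusing detections from the previous snippet; the polygon coordinates are illustrative:

import numpy as np
import supervision as sv

# Region of interest in pixel coordinates
polygon = np.array([[0, 0], [640, 0], [640, 480], [0, 480]])
zone = sv.PolygonZone(polygon=polygon)

# trigger() marks which detections fall inside the zone and updates the count
mask = zone.trigger(detections=detections)
print(zone.current_count)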
llama.cpp
Star 79k
Vol.84

Running Large LLaMA Models on a Laptop. This project makes LLaMA models run smoothly on the CPU, supporting macOS, Linux, and Windows.
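llama.cpp itself is a C/C++ project driven from the command line; as a hedged sketch, the community llama-cpp-python bindings (a separate wrapper project) expose it from Python. The model path below is hypothetical:

from llama_cpp import Llama

# Load a quantized GGUF model from disk (hypothetical path)
llm = Llama(model_path="./models/llama-2-7b.Q4_K_M.gguf")

output = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(output["choices"][0]["text"])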
