
Here you can read past volumes of HelloGitHub Monthly by category. You are currently viewing the HelloGitHub AI collection.

Star 92k
Vol.111

Google Gemini Command Line Tool. This project is the official open-source command-line tool for Gemini, bringing the capabilities of Google Gemini into the terminal. Built around a million-token context window, it can understand the architecture and logic of large codebases, and it supports multimodal input and output, Google Search, MCP, and other features.

gemini-cli
Star 18k
Vol.113

AI Programming Assistant Task Management Board. This is a kanban tool designed specifically for AI coding agents, capable of managing mainstream assistants such as Claude Code, Gemini CLI, and Codex from one place. It ties together kanban tasks, Git repositories, and AI coding agents, and supports multiple agents automatically completing tasks such as bug fixing, feature development, project initialization, and documentation generation.

vibe-kanban
Star 68k
Vol.105

Efficient LLM Inference and Serving Engine. This is a fast and easy-to-use large language model inference engine, built to address slow inference speeds and low resource utilization. Based on PyTorch and CUDA, it combines a memory-optimization algorithm (PagedAttention), computation-graph optimization, and model parallelism to significantly reduce GPU memory usage and fully exploit multi-GPU resources for higher inference throughput. vLLM is also seamlessly compatible with Hugging Face models and runs efficiently on a variety of hardware platforms such as GPUs, CPUs, and TPUs, making it suitable for real-time question answering, text generation, and recommendation systems.

from vllm import LLM

prompts = ["Hello, my name is", "The capital of France is"]  # Sample prompts.
llm = LLM(model="lmsys/vicuna-7b-v1.3")  # Create an LLM.
outputs = llm.generate(prompts)  # Generate texts from the prompts.
for output in outputs:  # Each result pairs a prompt with its generations.
    print(output.prompt, output.outputs[0].text)
vllm
Star 59k
Vol.111

Claude Coding Assistant in the Terminal. This project is Anthropic's officially open-sourced AI coding assistant. Running in the terminal, it can understand an entire codebase and help developers complete a variety of coding tasks more efficiently through simple natural-language commands.

claude-code
Star 144k
Vol.113

Visual AI Workflow Building Platform. This is an open-source platform for building AI agents and workflows, designed for developers and enterprise users. It wraps the core capabilities of LangChain (chains, tools, memory, vector stores, etc.) into reusable components and uses React Flow for visual flow editing, letting users quickly design, debug, and deploy complex AI workflows without writing code.

langflow
Star 9.4k
Vol.109

Automatic Selection of Economical GPU Options for AI Training and Inference. This is an open-source cross-cloud AI and batch job scheduling platform that allows users to run deep learning, distributed training, inference, batch processing, and other tasks on Kubernetes, local clusters, and major cloud service providers (AWS, GCP, Azure, etc.) through a unified interface. It automatically searches for the most cost-effective and available GPU/TPU/CPU resources, supporting features such as queuing, automatic fault tolerance, resource sharing, and cost optimization.

skypilot
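The cross-cloud scheduling described above is driven by a declarative task file. A minimal sketch, assuming a hypothetical task.yaml whose accelerator type and scripts are purely illustrative:

```yaml
# task.yaml -- illustrative SkyPilot task definition
resources:
  accelerators: A100:1   # SkyPilot searches clouds/regions for the cheapest match

setup: |                 # runs once when the VM is provisioned
  pip install torch

run: |                   # the actual job
  python train.py
```

The task would then be launched with `sky launch -c mycluster task.yaml`, letting SkyPilot provision the cheapest available instance and run the job there.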
Star 160k
Vol.97

Tool for Running Various Large Language Models Locally. This is a tool written in Go designed to install, launch, and manage large language models on a local machine with a single command. It supports models such as Llama 3, Gemma, Mistral, and is compatible with Windows, macOS, and Linux operating systems.

ollama
jax
Star 35k
Vol.90

Google's High-Performance Scientific Computing Library. This is a Python library for numerical computation that combines just-in-time (JIT) compilation, automatic differentiation (Autograd), and the XLA linear-algebra compiler behind a NumPy-like interface. Compared with NumPy, JAX is faster and more memory-efficient, and it adds automatic differentiation, automatic vectorization, parallel computing, and more.

from jax import grad
import jax.numpy as jnp

def tanh(x):  # Define a function
  y = jnp.exp(-2.0 * x)
  return (1.0 - y) / (1.0 + y)

grad_tanh = grad(tanh)  # Obtain its gradient function
print(grad_tanh(1.0))   # Evaluate it at x = 1.0
# prints 0.4199743
# Automatic higher-order differentiation
print(grad(grad(grad(tanh)))(1.0))
# prints 0.62162673
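The automatic vectorization and JIT compilation mentioned above compose just like grad. A minimal sketch, where the predict function and the shapes are made up for illustration:

```python
import jax
import jax.numpy as jnp

@jax.jit                     # compile with XLA on first call
def predict(w, x):
  return jnp.tanh(jnp.dot(x, w))

# Vectorize over a batch: map over axis 0 of x, broadcast w unchanged.
batched_predict = jax.vmap(predict, in_axes=(None, 0))

w = jnp.ones(3)
xs = jnp.arange(12.0).reshape(4, 3)
print(batched_predict(w, xs).shape)  # one prediction per batch row: (4,)
```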
Star 31k
Vol.60

A Tiny Open-Source Deep Learning Framework. Its codebase is under 1k lines and deliberately simple, yet it supports both inference and training of deep models. Example code:

from tinygrad.tensor import Tensor
import tinygrad.optim as optim

class TinyBobNet:
  def __init__(self):
    self.l1 = Tensor.uniform(784, 128)
    self.l2 = Tensor.uniform(128, 10)

  def forward(self, x):
    return x.dot(self.l1).relu().dot(self.l2).logsoftmax()

model = TinyBobNet()
opt = optim.SGD([model.l1, model.l2], lr=0.001)  # renamed to avoid shadowing the optim module

# ... training step, PyTorch-style, given a batch of (x, y) data

out = model.forward(x)
loss = out.mul(y).mean()
opt.zero_grad()
loss.backward()
opt.step()
tinygrad
Star 60k
Vol.44

An open-source autonomous driving system from comma.ai.

openpilot