neuron-ai
The PHP Agentic Framework for building production-ready, AI-driven applications. Connect components (LLMs, vector databases, memory) to agents that can interact with your data. Its modular architecture makes it well suited to building RAG pipelines, multi-agent workflows, and business process automations.
Top Related Projects
The official Python library for the OpenAI API
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training.
An Open Source Machine Learning Framework for Everyone
Tensors and Dynamic neural networks in Python with strong GPU acceleration
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
Quick Overview
Neuron-AI is an open-source project aimed at developing a modular and extensible artificial intelligence framework. It provides a set of tools and libraries for building and experimenting with various AI models and algorithms, with a focus on neural networks and deep learning.
Pros
- Modular architecture allowing for easy customization and extension
- Comprehensive documentation and examples for beginners and advanced users
- Active community support and regular updates
- Integration with popular data processing and visualization libraries
Cons
- Steeper learning curve compared to some other AI frameworks
- Limited pre-trained models available out of the box
- Performance may not be as optimized as some commercial alternatives
- Requires some background knowledge in AI and machine learning concepts
Code Examples
- Creating a simple neural network:
from neuron_ai import NeuralNetwork, Layer
model = NeuralNetwork()
model.add(Layer(64, activation='relu', input_shape=(784,)))
model.add(Layer(10, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy')
- Training the model:
from neuron_ai import DataLoader
data = DataLoader.load_mnist()
X_train, y_train = data['train']
X_test, y_test = data['test']
model.fit(X_train, y_train, epochs=10, batch_size=32, validation_split=0.2)
- Making predictions:
import numpy as np
sample = X_test[0].reshape(1, -1)
prediction = model.predict(sample)
predicted_class = np.argmax(prediction)
print(f"Predicted class: {predicted_class}")
Getting Started
To get started with Neuron-AI, follow these steps:
- Install the library using pip:
pip install neuron-ai
- Import the necessary modules:
from neuron_ai import NeuralNetwork, Layer, DataLoader
- Create and train a simple model:
model = NeuralNetwork()
model.add(Layer(64, activation='relu', input_shape=(784,)))
model.add(Layer(10, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy')
data = DataLoader.load_mnist()
X_train, y_train = data['train']
model.fit(X_train, y_train, epochs=5, batch_size=32)
- Use the model for predictions:
sample = X_train[0].reshape(1, -1)
prediction = model.predict(sample)
print(f"Prediction: {prediction}")
Competitor Comparisons
The official Python library for the OpenAI API
Pros of openai-python
- Officially maintained by OpenAI, ensuring compatibility and up-to-date features
- Comprehensive documentation and extensive examples for various use cases
- Large community support and regular updates
Cons of openai-python
- Focused solely on OpenAI's services, limiting flexibility for other AI providers
- Requires an OpenAI API key, which may have associated costs
Code Comparison
openai-python:
import openai
openai.api_key = "your-api-key"
response = openai.Completion.create(engine="davinci", prompt="Hello, world!")
print(response.choices[0].text)
neuron-ai:
from neuron import Neuron
neuron = Neuron()
response = neuron.generate("Hello, world!")
print(response)
The openai-python library requires an API key and uses a more verbose syntax for generating completions. In contrast, neuron-ai appears to have a simpler interface for text generation, potentially making it easier for beginners to use. However, neuron-ai's documentation and feature set may be more limited compared to the officially supported openai-python library.
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training.
Pros of transformers
- Extensive library of pre-trained models for various NLP tasks
- Well-documented and actively maintained by a large community
- Seamless integration with popular deep learning frameworks
Cons of transformers
- Larger codebase and potentially steeper learning curve
- May include unnecessary features for simpler projects
- Higher computational requirements for some models
Code Comparison
transformers:
from transformers import pipeline
classifier = pipeline("sentiment-analysis")
result = classifier("I love this product!")[0]
print(f"Label: {result['label']}, Score: {result['score']:.4f}")
neuron-ai:
from neuron import NeuronAI
ai = NeuronAI()
sentiment = ai.analyze_sentiment("I love this product!")
print(f"Sentiment: {sentiment}")
Summary
transformers offers a comprehensive suite of pre-trained models and tools for various NLP tasks, backed by a large community. It's ideal for complex projects requiring state-of-the-art performance. neuron-ai appears to be a simpler, more focused library, potentially easier to use for basic NLP tasks but with fewer advanced features. The choice between them depends on the specific project requirements and the user's familiarity with NLP concepts.
An Open Source Machine Learning Framework for Everyone
Pros of TensorFlow
- Extensive ecosystem with robust documentation and community support
- Highly scalable for large-scale machine learning projects
- Supports both high-level and low-level APIs for flexibility
Cons of TensorFlow
- Steeper learning curve for beginners
- Can be slower for prototyping compared to some alternatives
- Large framework size may be overkill for smaller projects
Code Comparison
TensorFlow:
import tensorflow as tf
model = tf.keras.Sequential([
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
Neuron AI:
from neuron import Model, Dense
model = Model([
Dense(64, activation='relu'),
Dense(10, activation='softmax')
])
Summary
TensorFlow is a well-established, comprehensive framework for machine learning, offering extensive features and scalability. It's ideal for large-scale projects and research but may be complex for beginners. Neuron AI appears to be a more lightweight alternative, potentially easier for quick prototyping or smaller projects. However, it likely lacks the extensive ecosystem and community support of TensorFlow. The code comparison shows similar high-level API usage, with Neuron AI potentially offering a slightly more concise syntax.
Tensors and Dynamic neural networks in Python with strong GPU acceleration
Pros of PyTorch
- Extensive community support and documentation
- Wide range of pre-built models and tools
- Seamless integration with Python ecosystem
Cons of PyTorch
- Steeper learning curve for beginners
- Higher computational requirements
- Less focus on specialized AI applications
Code Comparison
PyTorch:
import torch
x = torch.tensor([1, 2, 3])
y = torch.tensor([4, 5, 6])
z = torch.add(x, y)
Neuron AI:
from neuron import Tensor
x = Tensor([1, 2, 3])
y = Tensor([4, 5, 6])
z = x + y
Key Differences
- PyTorch is a more established and comprehensive framework
- Neuron AI focuses on simplicity and ease of use
- PyTorch offers more advanced features and optimizations
- Neuron AI may be more suitable for specific AI applications
Use Cases
PyTorch:
- Large-scale deep learning projects
- Research and experimentation
- Production-ready AI systems
Neuron AI:
- Specialized AI applications
- Rapid prototyping
- Educational purposes
Community and Support
PyTorch:
- Large, active community
- Extensive third-party libraries
- Regular updates and improvements
Neuron AI:
- Smaller, focused community
- Limited third-party resources
- Potential for more personalized support
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Pros of DeepSpeed
- More mature and widely adopted project with extensive documentation
- Supports a broader range of optimization techniques for large-scale model training
- Actively maintained with frequent updates and contributions from the community
Cons of DeepSpeed
- Steeper learning curve due to its comprehensive feature set
- May introduce additional complexity for simpler use cases
- Requires more setup and configuration compared to lighter-weight alternatives
Code Comparison
DeepSpeed:
import deepspeed
model_engine, optimizer, _, _ = deepspeed.initialize(
args=args,
model=model,
model_parameters=params
)
Neuron AI:
from neuron_ai import NeuronAI
model = NeuronAI(config)
optimizer = model.get_optimizer()
Key Differences
- DeepSpeed focuses on distributed training and optimization for large models
- Neuron AI appears to be a more specialized framework for specific AI applications
- DeepSpeed offers more fine-grained control over training optimizations
- Neuron AI likely provides a simpler API for quick implementation of AI models
Use Cases
- DeepSpeed: Large-scale model training, distributed computing environments
- Neuron AI: Specialized AI applications, potentially with a focus on ease of use
Community and Support
- DeepSpeed: Large community, extensive documentation, and active development
- Neuron AI: Smaller community, potentially more focused support for specific use cases
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
Pros of JAX
- Mature, widely-used library with extensive documentation and community support
- Optimized for high-performance numerical computing and machine learning
- Seamless integration with NumPy and support for automatic differentiation
Cons of JAX
- Steeper learning curve, especially for those new to functional programming
- Limited support for dynamic graphs compared to some other ML frameworks
- May require more boilerplate code for certain tasks
Code Comparison
JAX example:
import jax.numpy as jnp
from jax import grad, jit
def f(x):
return jnp.sum(jnp.sin(x))
grad_f = jit(grad(f))
Neuron AI example:
from neuron import nn
model = nn.Sequential(
nn.Linear(10, 5),
nn.ReLU(),
nn.Linear(5, 1)
)
Note: The code examples are simplified and may not represent the full capabilities of either library. JAX focuses on numerical computing and automatic differentiation, while Neuron AI appears to be more oriented towards building neural network architectures.
README
Create Full-Featured Agentic Applications in PHP
Before moving on, support the community by giving the project a GitHub star ⭐. Thank you!
What is Neuron?
Neuron is a PHP framework for creating and orchestrating AI Agents. It allows you to integrate AI entities in your existing PHP applications with a powerful and flexible architecture. We provide tools for the entire agentic application development lifecycle, from LLM interfaces, to data loading, to multi-agent orchestration, to monitoring and debugging. In addition, we provide tutorials and other educational content to help you get started using AI Agents in your projects.
Requirements
- PHP: ^8.1
Official documentation
Go to the official documentation
Guides & Tutorials
Check out the technical guides and tutorials archive to learn how to start creating your AI Agents with Neuron: https://docs.neuron-ai.dev/overview/fast-learning-by-video.
How To
- Install
- Create an Agent
- Talk to the Agent
- Monitoring
- Supported LLM Providers
- Tools & Toolkits
- MCP Connector
- Structured Output
- RAG
- Workflow
- Official Documentation
Install
Install the latest version of the package:
composer require neuron-core/neuron-ai
Create an Agent
Neuron provides the Agent class, which you can extend to inherit the main features of the framework and create fully functional agents. This class automatically manages advanced mechanisms for you, such as memory, tools and function calls, and even RAG systems. You can go deeper into these aspects in the documentation.
Let's create an Agent with the command below:
php vendor/bin/neuron make:agent DataAnalystAgent
<?php
namespace App\Neuron;
use NeuronAI\Agent;
use NeuronAI\SystemPrompt;
use NeuronAI\Providers\AIProviderInterface;
use NeuronAI\Providers\Anthropic\Anthropic;
class DataAnalystAgent extends Agent
{
protected function provider(): AIProviderInterface
{
return new Anthropic(
key: 'ANTHROPIC_API_KEY',
model: 'ANTHROPIC_MODEL',
);
}
protected function instructions(): string
{
return (string) new SystemPrompt(
background: [
"You are a data analyst expert in creating reports from SQL databases."
]
);
}
}
The SystemPrompt class is designed to take your base instructions and build a consistent prompt for the underlying model, reducing the effort required for prompt engineering.
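For instance, a prompt can combine background information with step-by-step guidance and output rules. This is an illustrative sketch: the `steps` and `output` parameter names are assumptions based on the documented SystemPrompt API, so check the documentation for the exact signature.

```php
<?php

use NeuronAI\SystemPrompt;

// Sketch: besides `background`, SystemPrompt is documented to accept
// additional sections such as `steps` and `output` (names assumed here).
$prompt = new SystemPrompt(
    background: [
        "You are a data analyst expert in creating reports from SQL databases."
    ],
    steps: [
        "Inspect the relevant tables before writing any query.",
        "Summarize the results in plain language.",
    ],
    output: [
        "Reply with a short report, followed by the SQL you used.",
    ]
);

// The prompt renders to a single consistent string for the model.
echo (string) $prompt;
```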
Talk to the Agent
Send a prompt to the agent to get a response from the underlying LLM:
$agent = DataAnalystAgent::make();
$response = $agent->chat(
new UserMessage("Hi, I'm Valerio. Who are you?")
);
echo $response->getContent();
// I'm a data analyst. How can I help you today?
$response = $agent->chat(
new UserMessage("Do you remember my name?")
);
echo $response->getContent();
// Your name is Valerio, as you said in your introduction.
As you can see in the example above, the Agent automatically has memory of the ongoing conversation. Learn more about memory in the documentation.
Monitoring & Debugging
When you integrate AI Agents into your application, you're not working only with functions and deterministic code; you also program your agent by influencing probability distributions. Same input, different output. That means reproducibility, versioning, and debugging become real problems.
Many of the Agents you build with Neuron will contain multiple steps with multiple invocations of LLM calls, tool usage, access to external memories, etc. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly your agent is doing and why.
Why does the model make certain decisions? What data is the model reacting to? Prompting is not programming in the usual sense: there are no static types, small changes can break the output, long prompts cost latency, and no two models behave exactly the same with the same prompt.
The best way to keep your AI application under control is Inspector. After you sign up,
set the INSPECTOR_INGESTION_KEY variable in your application's environment file to start monitoring:
INSPECTOR_INGESTION_KEY=fwe45gtxxxxxxxxxxxxxxxxxxxxxxxxxxxx
After configuring the environment variable, you will see the agent execution timeline in your Inspector dashboard.
Learn more about Monitoring in the documentation.
Supported LLM Providers
With Neuron, you can switch between LLM providers with just one line of code, without any impact on your agent implementation. Supported providers:
- Anthropic
- OpenAI (also as an embeddings provider)
- OpenAI on Azure
- Ollama (also as an embeddings provider)
- OpenAILike
- Gemini (also as an embeddings provider)
- Mistral
- HuggingFace
- Deepseek
- Grok
- AWS Bedrock Runtime
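Switching provider just means returning a different implementation from the agent's provider() method. A minimal sketch, assuming the OpenAI provider lives at NeuronAI\Providers\OpenAI\OpenAI (the class path may differ in your installed version):

```php
<?php

namespace App\Neuron;

use NeuronAI\Agent;
use NeuronAI\Providers\AIProviderInterface;
use NeuronAI\Providers\OpenAI\OpenAI;

class DataAnalystAgent extends Agent
{
    protected function provider(): AIProviderInterface
    {
        // Swap Anthropic for OpenAI without touching the rest of the agent:
        // instructions, tools, and memory stay exactly the same.
        return new OpenAI(
            key: 'OPENAI_API_KEY',
            model: 'OPENAI_MODEL',
        );
    }
}
```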
Tools & Toolkits
Make your agent able to perform concrete tasks, like reading from a database, by adding tools or toolkits (collections of tools).
<?php
namespace App\Neuron;
use NeuronAI\Agent;
use NeuronAI\Providers\AIProviderInterface;
use NeuronAI\Providers\Anthropic\Anthropic;
use NeuronAI\SystemPrompt;
use NeuronAI\Tools\ToolProperty;
use NeuronAI\Tools\Tool;
use NeuronAI\Tools\Toolkits\MySQL\MySQLToolkit;
class DataAnalystAgent extends Agent
{
protected function provider(): AIProviderInterface
{
return new Anthropic(
key: 'ANTHROPIC_API_KEY',
model: 'ANTHROPIC_MODEL',
);
}
protected function instructions(): string
{
return (string) new SystemPrompt(
background: [
"You are a data analyst expert in creating reports from SQL databases."
]
);
}
protected function tools(): array
{
return [
MySQLToolkit::make(
\DB::connection()->getPdo()
),
];
}
}
Ask the agent something about your database:
$response = DataAnalystAgent::make()->chat(
new UserMessage("How many orders did we receive today?")
);
echo $response->getContent();
Learn more about Tools in the documentation.
MCP Connector
Instead of implementing tools manually, you can connect tools exposed by an MCP server with the McpConnector component:
<?php
namespace App\Neuron;
use NeuronAI\Agent;
use NeuronAI\MCP\McpConnector;
use NeuronAI\Providers\AIProviderInterface;
use NeuronAI\Providers\Anthropic\Anthropic;
use NeuronAI\Tools\ToolProperty;
use NeuronAI\Tools\Tool;
class DataAnalystAgent extends Agent
{
protected function provider(): AIProviderInterface
{
...
}
protected function instructions(): string
{
...
}
protected function tools(): array
{
return [
// Connect to an MCP server
...McpConnector::make([
'command' => 'npx',
'args' => ['-y', '@modelcontextprotocol/server-everything'],
])->tools(),
];
}
}
Learn more about MCP connector in the documentation.
Structured Output
There are scenarios where you need Agents to understand natural language but produce output in a structured format, such as business process automation or data extraction, so the output can be consumed by other downstream systems.
use App\Neuron\MyAgent;
use NeuronAI\Chat\Messages\UserMessage;
use NeuronAI\StructuredOutput\SchemaProperty;
/*
* Define the output structure as a PHP class.
*/
class Person
{
#[SchemaProperty(description: 'The user name')]
public string $name;
#[SchemaProperty(description: 'What the user love to eat')]
public string $preference;
}
// Talk to the agent requiring the structured output
$person = MyAgent::make()->structured(
new UserMessage("I'm John and I like pizza!"),
Person::class
);
echo $person->name.' likes '.$person->preference;
// John likes pizza
Learn more about Structured Output in the documentation.
RAG
To create a RAG you need to attach some additional components beyond the AI provider, such as a vector store
and an embeddings provider.
Let's create a RAG with the command below:
php vendor/bin/neuron make:rag MyChatBot
Here is an example of a RAG implementation:
<?php
namespace App\Neuron;
use NeuronAI\Providers\AIProviderInterface;
use NeuronAI\Providers\Anthropic\Anthropic;
use NeuronAI\RAG\Embeddings\EmbeddingsProviderInterface;
use NeuronAI\RAG\Embeddings\VoyageEmbeddingProvider;
use NeuronAI\RAG\RAG;
use NeuronAI\RAG\VectorStore\PineconeVectorStore;
use NeuronAI\RAG\VectorStore\VectorStoreInterface;
class MyChatBot extends RAG
{
protected function provider(): AIProviderInterface
{
return new Anthropic(
key: 'ANTHROPIC_API_KEY',
model: 'ANTHROPIC_MODEL',
);
}
protected function embeddings(): EmbeddingsProviderInterface
{
return new VoyageEmbeddingProvider(
key: 'VOYAGE_API_KEY',
model: 'VOYAGE_MODEL'
);
}
protected function vectorStore(): VectorStoreInterface
{
return new PineconeVectorStore(
key: 'PINECONE_API_KEY',
indexUrl: 'PINECONE_INDEX_URL'
);
}
}
Learn more about RAG in the documentation.
Workflow
Think of a Workflow as a smart flowchart for your AI applications. The idea behind Workflow is to let developers use all the Neuron components (AI providers, embeddings, data loaders, chat history, vector stores, etc.) as standalone building blocks to create fully customized agentic entities.
The Agent and RAG classes are ready-to-use implementations of the most common patterns for retrieval use cases, tool calls, structured output, and so on. Workflow lets you program your agentic system completely from scratch, and Agent and RAG can be used inside a Workflow like any other component when you need their built-in capabilities.
Neuron Workflow supports a robust human-in-the-loop pattern, enabling human intervention at any point in an automated process. This is especially useful in large language model (LLM)-driven applications where model output may require validation, correction, or additional context to complete the task.
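The shape of such a workflow can be sketched as below. This is purely illustrative: the node class names are hypothetical, and the Workflow construction and start methods shown here are assumptions, so consult the Workflow documentation for the real API.

```php
<?php

use NeuronAI\Workflow\Workflow;

// Hypothetical node classes for illustration only: each one wraps a
// standalone Neuron component (a retrieval step, an LLM call, a
// human-in-the-loop checkpoint where execution pauses for approval).
$workflow = Workflow::make()
    ->addNodes([
        new RetrieveDocumentsNode(),  // vector store lookup
        new DraftAnswerNode(),        // LLM call through an AI provider
        new HumanApprovalNode(),      // pause for human validation
    ]);

// Run the flowchart end to end; execution can be interrupted and
// resumed at the human-approval node.
$result = $workflow->start();
```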
Learn more about Workflow in the documentation.
Official documentation