tracecat
All-in-one AI automation platform (workflows, agents, cases, tables) for security, IT, and infra teams.
Top Related Projects
- langchain: 🦜🔗 The platform for reliable agents.
- semantic-kernel: Integrate cutting-edge LLM technology quickly and easily into your apps
- openai-cookbook: Examples and guides for using the OpenAI API
- promptflow: Build high-quality LLM apps, from prototyping and testing to production deployment and monitoring.
- haystack: AI orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search, or conversational agent chatbots.
Quick Overview
Tracecat is an open-source automation platform that allows security, IT, and infrastructure teams to build, test, and deploy workflows. It provides a visual interface for creating and managing automations, with built-in case management and integration templates. Tracecat aims to simplify the process of handling complex response workflows and integrations.
Pros
- Visual workflow builder for easy pipeline creation
- Supports various data sources and integrations
- Open-source and customizable
- Designed with security and IT automation use cases in mind
Cons
- Relatively new project, may have limited community support
- Documentation could be more comprehensive
- May require additional setup for certain integrations
- Learning curve for users new to workflow automation concepts
Code Examples
# Example 1: Creating a simple workflow
from tracecat import Workflow
workflow = Workflow("My Workflow")
source = workflow.add_node("HTTPSource", url="https://api.example.com/data")
transform = workflow.add_node("JSONTransform", path="$.data")
sink = workflow.add_node("ElasticsearchSink", index="my_index")
workflow.connect(source, transform)
workflow.connect(transform, sink)
# Example 2: Adding a custom function to a workflow
from tracecat import Workflow, FunctionNode
def my_custom_function(data):
    # Illustrative transform: uppercase every string field in the record
    processed_data = {k: v.upper() if isinstance(v, str) else v for k, v in data.items()}
    return processed_data
workflow = Workflow("Custom Function Workflow")
source = workflow.add_node("KafkaSource", topic="input_topic")
custom_node = workflow.add_node(FunctionNode(my_custom_function))
sink = workflow.add_node("S3Sink", bucket="output_bucket")
workflow.connect(source, custom_node)
workflow.connect(custom_node, sink)
# Example 3: Using conditionals in a workflow
from tracecat import Workflow, Condition
workflow = Workflow("Conditional Workflow")
source = workflow.add_node("PrometheusSource", query="metric_name")
condition = workflow.add_node(Condition("value > 100"))
alert = workflow.add_node("AlertManagerSink", severity="high")
log = workflow.add_node("LogSink", path="/var/log/app.log")
workflow.connect(source, condition)
workflow.connect(condition, alert, condition=True)
workflow.connect(condition, log, condition=False)
Getting Started
To get started with Tracecat, follow these steps:
1. Install Tracecat:
pip install tracecat
2. Create a new workflow:
from tracecat import Workflow
workflow = Workflow("My First Workflow")
3. Add nodes and connections to your workflow:
source = workflow.add_node("HTTPSource", url="https://api.example.com/data")
sink = workflow.add_node("ConsoleSink")
workflow.connect(source, sink)
4. Run the workflow:
workflow.run()
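Putting the steps together, the full quick-start script looks like this (the node types and run() method follow the illustrative API used throughout this page, not a verified Tracecat release):

```python
# Complete quick-start script combining steps 1-4 above.
# Uses the same illustrative Workflow API as the examples on this page.
from tracecat import Workflow

workflow = Workflow("My First Workflow")

# Fetch data from an HTTP endpoint and print it to the console
source = workflow.add_node("HTTPSource", url="https://api.example.com/data")
sink = workflow.add_node("ConsoleSink")
workflow.connect(source, sink)

workflow.run()
```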
For more detailed instructions and advanced usage, refer to the Tracecat documentation.
Competitor Comparisons
langchain: 🦜🔗 The platform for reliable agents.
Pros of langchain
- Extensive ecosystem with a wide range of integrations and tools
- Well-documented with comprehensive guides and examples
- Large and active community support
Cons of langchain
- Can be complex and overwhelming for beginners
- Requires more setup and configuration for basic tasks
- May have performance overhead due to its extensive feature set
Code Comparison
langchain:
from langchain import OpenAI, LLMChain, PromptTemplate
llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(input_variables=["product"], template="What is a good name for a company that makes {product}?")
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("colorful socks"))
tracecat:
from tracecat import Tracecat
tc = Tracecat()
workflow = tc.workflow("example")
llm_node = workflow.add_node("llm", "What is a good name for a company that makes colorful socks?")
workflow.run()
print(llm_node.output)
The code comparison shows that langchain requires more setup but offers more flexibility, while tracecat provides a more streamlined approach with its workflow-based structure. langchain's example demonstrates its modular nature, allowing for easy customization of prompts and models. tracecat's example showcases its simplicity and built-in workflow management.
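To make the modularity point concrete, swapping the model or the prompt in the langchain snippet is a local change while the chain wiring stays the same. A minimal sketch using the same legacy langchain API as above (the model name and the extra tone variable are illustrative):

```python
# Only the model and prompt change; the LLMChain wiring is untouched.
from langchain import OpenAI, LLMChain, PromptTemplate

llm = OpenAI(model_name="gpt-3.5-turbo-instruct", temperature=0.2)
prompt = PromptTemplate(
    input_variables=["product", "tone"],
    template="Suggest a {tone} name for a company that makes {product}.",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(product="colorful socks", tone="playful"))
```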
semantic-kernel: Integrate cutting-edge LLM technology quickly and easily into your apps
Pros of semantic-kernel
- More comprehensive and mature project with extensive documentation
- Broader language support (C#, Python, Java) for wider developer adoption
- Stronger integration with Azure AI services and other Microsoft technologies
Cons of semantic-kernel
- Steeper learning curve due to its more complex architecture
- Potentially higher resource requirements for deployment and operation
- More tightly coupled with Microsoft's ecosystem, which may limit flexibility
Code Comparison
semantic-kernel:
import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import OpenAITextCompletion

kernel = sk.Kernel()
kernel.add_text_completion_service("dv", OpenAITextCompletion("text-davinci-003"))
prompt = kernel.create_semantic_function("Write a poem about {{$input}}")
result = await kernel.run_async(prompt, input_str="AI")  # call from an async function
print(result)
tracecat:
from tracecat import Tracecat
tc = Tracecat()
workflow = tc.create_workflow("poem_generator")
workflow.add_node("generate_poem", "Write a poem about {{input}}")
result = tc.run_workflow(workflow, input="AI")
print(result)
Both projects aim to simplify AI integration, but semantic-kernel offers a more comprehensive framework with deeper Microsoft ecosystem integration, while tracecat focuses on workflow automation with a potentially simpler learning curve.
openai-cookbook: Examples and guides for using the OpenAI API
Pros of openai-cookbook
- Comprehensive collection of examples and best practices for using OpenAI's APIs
- Well-maintained and regularly updated by OpenAI's team
- Covers a wide range of use cases and applications
Cons of openai-cookbook
- Focused solely on OpenAI's products, limiting its scope
- May not provide as much flexibility for custom workflows or integrations
- Less emphasis on end-to-end automation and orchestration
Code Comparison
openai-cookbook:
import openai
text = "Hello, how are you today?"
response = openai.Completion.create(
    engine="text-davinci-002",
    prompt=f"Translate the following English text to French: '{text}'",
    max_tokens=60,
)
print(response.choices[0].text)
tracecat:
from tracecat import Tracecat
tc = Tracecat()
workflow = tc.create_workflow("Translation")
workflow.add_node("Translate", "openai_translate", {
"text": "input_text",
"target_language": "French"
})
The openai-cookbook example demonstrates direct API usage, while tracecat showcases a more abstracted workflow-based approach for integrating AI capabilities into larger systems.
promptflow: Build high-quality LLM apps, from prototyping and testing to production deployment and monitoring.
Pros of promptflow
- More extensive documentation and examples
- Integrated with Azure AI services for seamless deployment
- Larger community and corporate backing from Microsoft
Cons of promptflow
- Steeper learning curve due to more complex features
- Potentially higher costs when using Azure services
- Less focus on local development and testing
Code Comparison
tracecat:
from tracecat import Workflow
workflow = Workflow()
workflow.add_node("start", "input", {"text": "Hello, world!"})
workflow.add_node("output", "print", {})
workflow.add_edge("start", "output")
promptflow:
from promptflow import PFClient
client = PFClient()
flow = client.flows.create_or_update(
flow="hello_world",
inputs={"text": "Hello, world!"},
outputs={"result": "${output}"}
)
Summary
Both tracecat and promptflow offer workflow management for AI and ML tasks. Promptflow provides more extensive features and Azure integration, making it suitable for large-scale enterprise deployments. However, this comes with increased complexity and potential costs. Tracecat offers a simpler, more lightweight approach that may be preferable for smaller projects or local development. The choice between the two depends on the specific needs of the project and the development environment.
haystack: AI orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search, or conversational agent chatbots.
Pros of Haystack
- More mature and widely adopted project with a larger community
- Comprehensive documentation and extensive examples
- Broader range of features for NLP tasks beyond just question answering
Cons of Haystack
- Steeper learning curve due to its extensive feature set
- Heavier resource requirements for deployment and operation
- Less focused on specific use cases, which may lead to unnecessary complexity for simpler projects
Code Comparison
Tracecat (Node.js):
const tracecat = require('tracecat');
const input = { query: 'Your question here' };
const result = await tracecat.process(input); // run inside an async function
console.log(result);
Haystack (Python):
from haystack import Pipeline
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import BM25Retriever

document_store = InMemoryDocumentStore(use_bm25=True)
retriever = BM25Retriever(document_store=document_store)
pipe = Pipeline()
pipe.add_node(component=retriever, name="Retriever", inputs=["Query"])
result = pipe.run(query="Your question here")
print(result)
Both repositories aim to simplify natural language processing tasks, but they differ in their approach and target audience. Tracecat focuses on providing a streamlined solution for specific use cases, while Haystack offers a more comprehensive toolkit for various NLP tasks. The code comparison shows that Tracecat has a simpler API, while Haystack requires more setup but offers greater flexibility.
README
Tracecat is a modern, open source automation platform built for security and IT engineers. Simple YAML-based templates for integrations with a no-code UI for workflows. Built-in lookup tables and case management. Orchestrated using Temporal for scale and reliability.
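To illustrate what "orchestrated using Temporal" means in practice, here is a minimal Temporal Python workflow showing the durable execution model Tracecat builds on. This is a sketch of Temporal's SDK, not Tracecat's internal code:

```python
# Minimal Temporal (temporalio) sketch: workflows are replayable functions,
# activities are retryable units of work. Not Tracecat source code.
from datetime import timedelta
from temporalio import activity, workflow

@activity.defn
async def run_action(name: str) -> str:
    # One retryable unit of work, e.g. a single integration call
    return f"ran {name}"

@workflow.defn
class AutomationWorkflow:
    @workflow.run
    async def run(self, action: str) -> str:
        # Temporal persists workflow state, so this step survives worker
        # restarts and is automatically retried on failure
        return await workflow.execute_activity(
            run_action, action, start_to_close_timeout=timedelta(seconds=30)
        )
```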

Getting Started
> [!IMPORTANT]
> Tracecat is in active development. Expect breaking changes with releases. Review the release changelog before updating.
Run Tracecat locally
Deploy a local Tracecat stack using Docker Compose. View full instructions here.
Run Tracecat on AWS Fargate
For advanced users: deploy a production-ready Tracecat stack on AWS Fargate using Terraform. View full instructions here.
Run Tracecat on Kubernetes
Coming soon.
Community
Have questions? Feedback? New integration ideas? Come hang out with us in the Tracecat Community Discord.
Tracecat Registry
Tracecat Registry is a collection of integration and response-as-code templates.
Response actions are organized into Tracecat's own ontology of common capabilities (e.g. list_alerts, list_cases, list_users).
Template inputs (e.g. start_time, end_time) are normalized to fit the Open Cybersecurity Schema Framework (OCSF) ontology where possible.
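To make the shape of a template concrete, here is a hypothetical sketch loaded with PyYAML. The action name and input fields follow the conventions described above but are assumptions, not the actual registry schema; see the documentation and repo linked below for real templates:

```python
# Hypothetical registry-style template. Field names mirror the conventions
# described above (list_alerts, start_time, end_time) but are NOT the real
# Tracecat Registry schema.
import yaml

TEMPLATE = """
name: list_alerts
description: List alerts from an example SIEM between two timestamps.
inputs:
  start_time: "2024-01-01T00:00:00Z"
  end_time: "2024-01-02T00:00:00Z"
"""

template = yaml.safe_load(TEMPLATE)
print(template["name"], "inputs:", sorted(template["inputs"]))
```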
Examples
Visit our documentation on Tracecat Registry for use cases and ideas. Or check out existing open source templates in our repo.
Open Source vs Enterprise
This repo is available under the AGPL-3.0 license with the exception of the ee directory. The ee directory contains paid enterprise features requiring a Tracecat Enterprise license.
The purpose of the Enterprise Edition is to provide additional, powerful features that require specific investments in research and development.
You can enable the Enterprise Edition directly in the settings of the platform.
If you are interested in Tracecat's Enterprise self-hosted or managed Cloud offering, check out our website or book a meeting with us.
Security
SSO, audit logs, and IaC deployments (Terraform, Kubernetes / Helm) will always be free and available. We're working on a comprehensive list of Tracecat's threat model, security features, and hardening recommendations. For immediate answers to these questions, please reach out to us on Discord.
Please report any security issues to security@tracecat.com and include tracecat in the subject line.
Contributors
Thank you to all of our amazing contributors for contributing code, integrations, and support. Open source is only possible because of you. ❤️
Tracecat is distributed under AGPL-3.0