AI integration framework for Flarum with text generation, embeddings, and moderation. Multi-provider support (OpenAI, Gemini, Anthropic) with an extensible architecture.

## Features
- 🤖 Text generation with streaming support
- 🔍 Vector embeddings for semantic search
- 🛡️ AI-powered content moderation
- 🔌 Multi-provider architecture (OpenAI, Gemini, Anthropic)
- ⚡ SSE streaming for real-time responses
- 🔧 Extensible provider system
## Installation

```bash
composer require datlechin/flarum-ai
```

## Configuration

- Navigate to Admin Panel → Extensions → AI
- Select your LLM provider (OpenAI, Gemini, or Anthropic)
- Enter your API key
- Configure model settings
## Usage

### Text Generation

```php
use Datlechin\Ai\Providers\HttpProviderFactory;

// Get the provider instance
$provider = app(HttpProviderFactory::class)->createLlmProvider();

// Generate text
$messages = [
    ['role' => 'system', 'content' => 'You are a helpful assistant.'],
    ['role' => 'user', 'content' => 'Hello!'],
];

$result = $provider->complete($messages);
echo $result['content'];
```

### Streaming

```php
// Stream responses in real-time
foreach ($provider->stream($messages) as $chunk) {
    echo $chunk; // Output each chunk as it arrives
}
```

### Embeddings

```php
use Datlechin\Ai\Providers\HttpProviderFactory;

// Get the embeddings provider
$provider = app(HttpProviderFactory::class)->createEmbeddingsProvider();

// Generate embeddings
$text = "This is some text to embed";
$embedding = $provider->embed($text);

// Returns an array of floats (vector representation)
print_r($embedding);
```

### Moderation

```php
use Datlechin\Ai\Providers\HttpProviderFactory;

// Get the moderation provider
$provider = app(HttpProviderFactory::class)->createModerationProvider();

// Check content
$result = $provider->moderate("Content to check");

if ($result['flagged']) {
    // Handle flagged content
    print_r($result['categories']);
}
```

## Custom Providers

Implement the provider interfaces:
```php
namespace MyExtension\Providers;

use Datlechin\Ai\Providers\Contracts\LlmProviderInterface;

class CustomLlmProvider implements LlmProviderInterface
{
    public function complete(array $messages, array $options = []): array
    {
        // Your implementation
        return [
            'content' => 'Generated text',
            'usage' => ['tokens' => 100],
        ];
    }

    public function stream(array $messages, array $options = []): \Generator
    {
        // Yield chunks
        yield "chunk1";
        yield "chunk2";
    }

    public function getName(): string
    {
        return 'custom';
    }

    public function getModel(): string
    {
        return 'custom-model';
    }
}
```

Register it in `extend.php`:
```php
use Datlechin\Ai\Providers\ProviderCatalog;
use Flarum\Extend;
use MyExtension\Providers\CustomLlmProvider;

return [
    (new Extend\ServiceProvider())
        ->register(function ($container) {
            $catalog = $container->make(ProviderCatalog::class);
            $catalog->register('custom', CustomLlmProvider::class);
        }),
];
```

## Supported Providers

### OpenAI

- Models: GPT-4, GPT-4 Turbo, GPT-3.5 Turbo
- Supports: Text generation, embeddings, moderation
- Streaming: ✅
### Gemini

- Models: Gemini Pro, Gemini Flash
- Supports: Text generation, embeddings
- Streaming: ✅
### Anthropic

- Models: Claude 3.5 Sonnet, Claude 3.5 Haiku, Claude 3 Opus
- Supports: Text generation
- Streaming: ✅
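Not every provider implements every capability (Anthropic, for instance, offers text generation only), so code that needs embeddings or moderation should be prepared for the factory to fail when an unsupported provider is configured. A minimal sketch, assuming the factory throws a `RuntimeException` in that case (check the factory source for the actual exception type):

```php
use Datlechin\Ai\Providers\HttpProviderFactory;

try {
    // May fail if the configured provider has no embeddings support
    $embeddings = app(HttpProviderFactory::class)->createEmbeddingsProvider();
    $vector = $embeddings->embed('Some forum post text');
} catch (\RuntimeException $e) {
    // Fall back gracefully, e.g. skip semantic indexing for this post
    $vector = null;
}
```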
## Events

Listen to AI events in your extensions:
```php
use Datlechin\Ai\Events\TextGenerated;
use Flarum\Extend;

return [
    (new Extend\Event())
        ->listen(TextGenerated::class, function (TextGenerated $event) {
            // $event->content
            // $event->provider
            // $event->model
        }),
];
```

Available events:
- `TextGenerationStarted`
- `TextGenerated`
- `EmbeddingsStarted`
- `EmbeddingsGenerated`
- `ModerationStarted`
- `ModerationCompleted`
- `ProviderInitialized`
- `ProviderFailed`
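Failure events are useful for logging and fallbacks. A sketch of a `ProviderFailed` listener, assuming the event exposes `provider` and `exception` properties (inspect the event class for the real field names):

```php
use Datlechin\Ai\Events\ProviderFailed;
use Flarum\Extend;

return [
    (new Extend\Event())
        ->listen(ProviderFailed::class, function (ProviderFailed $event) {
            // Property names are assumptions; check the event class
            error_log(sprintf(
                'AI provider "%s" failed: %s',
                $event->provider,
                $event->exception->getMessage()
            ));
        }),
];
```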
## REST API

### Generate Text

```
POST /api/ai/generate
Content-Type: application/json

{
    "messages": [
        {"role": "system", "content": "You are helpful"},
        {"role": "user", "content": "Hello"}
    ],
    "stream": true
}
```

### Embeddings

```
POST /api/ai/embeddings
Content-Type: application/json

{
    "text": "Text to embed"
}
```

### Moderation

```
POST /api/ai/moderate
Content-Type: application/json

{
    "content": "Content to check"
}
```

## Examples

### Summarization

```php
$provider = app(HttpProviderFactory::class)->createLlmProvider();

$messages = [
    ['role' => 'system', 'content' => 'Summarize the following text concisely.'],
    ['role' => 'user', 'content' => $longText],
];

$summary = $provider->complete($messages);
```

### Semantic Search

```php
$embeddingsProvider = app(HttpProviderFactory::class)->createEmbeddingsProvider();

// Embed the query
$queryVector = $embeddingsProvider->embed($searchQuery);

// Search in the database (using vector similarity)
$results = DB::table('ai_embeddings')
    ->selectRaw('*, vector_distance(embedding, ?) as distance', [$queryVector])
    ->orderBy('distance')
    ->limit(10)
    ->get();
```

### Content Moderation

```php
$moderationProvider = app(HttpProviderFactory::class)->createModerationProvider();

$result = $moderationProvider->moderate($userContent);

if ($result['flagged']) {
    // Auto-hide or flag for review
    $post->hide();
}
```

## Settings

Settings available in the admin panel:

- `datlechin-ai.provider` - Selected provider (`openai`, `gemini`)
- `datlechin-ai.api_key` - API key for the provider
- `datlechin-ai.models.selected.text` - Text generation model
- `datlechin-ai.models.selected.embeddings` - Embeddings model
- `datlechin-ai.models.selected.moderation` - Moderation model

Access in code:

```php
$provider = $settings->get('datlechin-ai.provider');
$model = $settings->get('datlechin-ai.models.selected.text');
```

## Requirements

- Flarum 1.2+
- PHP 8.1+
- Composer 2.0+
- Valid API key for chosen provider
## Support

If you find this extension helpful, you can support ongoing development through GitHub Sponsors.