Assisterr is a decentralised platform that simplifies the creation, maintenance, and support of community-owned Small Language Models (SLMs).

Our incentive-driven data inference framework allows everyone to contribute knowledge and expertise to train these AI models, ensuring permissionless access, data quality, and fair rewards.

Our mission is to ensure everyone has equal opportunities to contribute to and benefit from AI progress.

Assisterr Versus AI Monopoly

Is AI a global threat? Not exactly—unless it’s in the hands of a few monopolists backed by Big Tech companies or governments. 

Biased black boxes shape our thoughts and actions to boost Big Tech’s profits or support political regimes. This isn’t just a dystopian fantasy—it’s our reality, and we must tackle it.

Another side of the same coin is Data Ownership. 

Big Tech players simply took knowledge that belongs to all humanity, trained their Large Language Models (LLMs) on it, and then offered it back to us for a $20/month subscription.

Recently, Google secured a deal to pay Reddit $60 million annually for API access to its user-generated data.

Yet, there’s little talk about the community of contributors, their share in this $60M, or how they can claim a piece of the value their data creates.

We’re set on changing this with Assisterr by creating an infrastructure that backs Decentralised AI Data inference and a network of community-owned Small Language Models.

Data pipeline > AI models

Since late 2022, we’ve tested AI agents for retail investors, sales teams, and developer-focused organisations. 

Our hands-on work with LLMs for automating specific business tasks showed us that the real magic lies not in the LLMs themselves but in setting up a data pipeline filled with custom, frequently refreshed data.

Data inference is a significant bottleneck for domain-specific use cases. 

Individual users and organisations often hesitate to share their data despite it being crucial for any AI-powered solution. 

To tackle this, we’ve built an entire infrastructure to facilitate quick model setup and introduced a framework that motivates data sharing through incentives.

Our first major use case focused on automating real-time developer relations for developer-facing organisations. 

We trained DevRel AI agents for platforms like Solana, NEAR, Particle Network, and Light Link (an ETH L2), using a wide range of tech documentation and codebases. These AI agents excelled, handling up to 95% of support requests. They slashed waiting times for responses, highlighted gaps in tech documentation, and offered suggestions for enhancements.

So what are we offering, and how does it work?

Our infrastructure lets anyone create a Small Language Model tailored to a specific domain or business function, connect it to a user interface, and invite the community to enhance its dataset.
Each SLM is laser-focused on a specific task and nails it. We’re crafting efficient SLMs that are experts in a particular field of knowledge.

The point is that the community owns the data powering these AI models. Individual contributors feed their SLMs with fresh, accurate information, and in the long term those models will outperform LLMs on each specific task.

To achieve this, we’re focusing on:

  • An AI infrastructure layer that underpins community-owned models and their interoperability. Think of it as a Store of SLMs, ready to assist both humans and AI Autonomous Agents with any task;
  • A framework for incentive-driven inference verification.
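
The verification framework is not spelled out in detail here, so the following is a purely hypothetical sketch (the function name, the majority-vote scheme, and the reward logic are our assumptions, not Assisterr's published design): several contributors review a model output, and the majority side splits a reward.

```python
def verify_inference(votes: dict, reward: int) -> dict:
    """Hypothetical majority-vote verification: each reviewer votes
    True (output looks correct) or False. Reviewers on the majority
    side split the reward evenly; the minority earns nothing."""
    approvals = [r for r, v in votes.items() if v]
    rejections = [r for r, v in votes.items() if not v]
    majority = approvals if len(approvals) >= len(rejections) else rejections
    if not majority:  # no votes cast, nothing to distribute
        return {}
    share = reward // len(majority)
    return {reviewer: share for reviewer in majority}

# Two of three reviewers approve, so they split the 90-token reward.
rewards = verify_inference({"alice": True, "bob": True, "carol": False}, 90)
```

Real designs would also need sybil resistance and dispute handling, but the core incentive idea is this simple: agreement with the verified outcome is what gets paid.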

How do we keep our models relevant and effective with continuous data updates? Here are the key components:

  • Initial dataset
  • Data pre-processing flow
  • Regular flow of up-to-date information
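
These three components can be illustrated with a minimal sketch, assuming a plain text dataset (the class, names, and pre-processing steps are hypothetical, not Assisterr's actual pipeline):

```python
from dataclasses import dataclass, field

@dataclass
class SLMDataset:
    """Toy model of the three-part flow: an initial dataset,
    a pre-processing step, and a regular stream of fresh
    community contributions merged in over time."""
    records: set = field(default_factory=set)

    @staticmethod
    def preprocess(text: str) -> str:
        # Minimal pre-processing: normalise whitespace and case.
        return " ".join(text.split()).lower()

    def ingest(self, contributions: list) -> int:
        """Merge contributions, skipping duplicates.
        Returns the number of genuinely new records."""
        added = 0
        for raw in contributions:
            record = self.preprocess(raw)
            if record and record not in self.records:
                self.records.add(record)
                added += 1
        return added

# Seed with an initial dataset, then apply a periodic update.
dataset = SLMDataset()
dataset.ingest(["Solana docs v1", "NEAR docs v1"])
new = dataset.ingest(["Solana docs v1", "Solana docs v2"])  # one duplicate
```

The same loop runs on every update cycle, which is what keeps the model's knowledge current rather than frozen at training time.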

We are forging a new social contract for the AI era, valuing data ownership, expertise, and contributions as the significant assets they truly are.

Our Future Vision

Our vision of the future is simple: a Network of Small Language Models owned by the community and benefiting the community of contributors. To make it happen, we are designing the ecosystem with SLMs, Autonomous Agents, and Contributors and evolving the infrastructure to support economic and data interoperability, model storage, and reward distribution. 

Community-owned SLMs will compete with Google and LLMs in performing specific tasks (like launching apps or dApps, video editing, article writing, marketing campaigns, design, etc.) for humans or Autonomous Agents.

They’ll charge tokens for usage, fairly shared between model contributors and Assisterr.

It’s like an AI-driven Upwork or Fiverr platform, where agents need a crypto wallet to pay these contractors (SLMs) for every task they finish.
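
As a back-of-the-envelope illustration of such a fee split (the 20% platform share and the proportional-to-contributions rule are assumptions made for the example, not published parameters):

```python
def split_usage_fee(fee: int, contributions: dict,
                    platform_share: float = 0.2) -> dict:
    """Split a usage fee (in tokens) between the platform and data
    contributors, proportional to each contributor's share of
    accepted records. The 20% platform cut is purely illustrative."""
    platform_cut = int(fee * platform_share)
    pool = fee - platform_cut
    total = sum(contributions.values())
    payouts = {"platform": platform_cut}
    for contributor, count in contributions.items():
        payouts[contributor] = pool * count // total
    return payouts

# alice contributed 3 accepted records, bob 1; a 100-token fee arrives.
payouts = split_usage_fee(100, {"alice": 3, "bob": 1})
```

Whatever the real parameters turn out to be, the key property is that payouts trace back to who supplied the data, which is the "fairly shared" part of the claim above.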

Final word

What sets us apart? 

We’re set on proving that SLMs can surpass LLMs, particularly in precision tasks that demand deep domain expertise. 

Committed to a future where AI is for all, we see vast potential in cryptocurrency. At Assisterr, we aim for everyone to benefit from AI’s growth. 

Do you share our vision for the future? Join our team and help build decentralised AI. 
