
MariTalk is an LLM-based chatbot trained to meet the needs of Brazil.

Use it for free at chat.maritaca.ai

Our Products

Our Portuguese-specialized LLMs are available through two products:
MariTalk API: models that run on our cloud
MariTalk Local: models that run on your machine

MariTalk API

The MariTalk API lets you use our Sabiá-2 models on a pay-per-use basis, billed by the number of tokens sent (prompt) and generated.

Thanks to their specialized training, the Sabiá-2 models deliver higher quality at a lower price than competing models.

The chart below shows the quality of our models, measured by performance on 64 Brazilian exams (Enem, Enade, Revalida, OAB, UNICAMP, USP, etc.), versus price:

[Chart: cost-benefit comparison — exam performance vs. price per 1M tokens]

* Assuming US$1 = R$5 for OpenAI's models.

The estimate assumes that, of every 1 million tokens, 500,000 are input tokens and 500,000 are output tokens.

One million tokens correspond to roughly 700 pages of text in Portuguese.
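
As a concrete illustration, the blended cost per 1 million tokens under that 50/50 split can be computed from the per-1M-token prices listed in the pricing section below (a minimal sketch; the figures are the published list prices, not a quote):

# Blended cost of 1M tokens, assuming a 50/50 split between
# input (prompt) and output (generated) tokens.
# Prices in US$ per 1M tokens, from the pricing section below.
prices = {
    "sabia-2-small": {"input": 0.20, "output": 0.60},
    "sabia-2-medium": {"input": 1.00, "output": 3.00},
}

def blended_cost(model: str, input_share: float = 0.5) -> float:
    p = prices[model]
    return input_share * p["input"] + (1.0 - input_share) * p["output"]

print(blended_cost("sabia-2-small"))   # 0.40 -> US$0.40 per ~700 pages of text
print(blended_cost("sabia-2-medium"))  # 2.00 -> US$2.00 per ~700 pages of text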

For more details, see the Sabiá-2 blog post.

Models and Pricing

Sabiá-2 Small
US$0.20 per 1M input tokens / US$0.60 per 1M output tokens
US$4 of initial credits
8192-token context window
Low latency; best cost-benefit
Rate limit: 150k input tokens/min, 50k output tokens/min

Sabiá-2 Medium
US$1.00 per 1M input tokens / US$3.00 per 1M output tokens
US$4 of initial credits
8192-token context window
Our highest-quality model yet
Rate limit: 150k input tokens/min, 50k output tokens/min
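
If you send many requests in a row, a simple client-side throttle helps you stay under these limits. The sketch below is illustrative only: it estimates token counts with a crude character-based heuristic (the API meters real tokens, so actual counts will differ) and uses only the model.generate call introduced in the How to use section that follows.

import time
import maritalk

INPUT_TOKENS_PER_MIN = 150_000  # published input-token rate limit

def rough_token_count(text: str) -> int:
    # Crude heuristic (~4 characters per token), just to keep the sketch
    # self-contained; the real tokenizer will produce different counts.
    return max(1, len(text) // 4)

def generate_throttled(model, prompts):
    """Yield answers while keeping the estimated input-token rate under the limit."""
    window_start = time.time()
    tokens_in_window = 0
    for prompt in prompts:
        estimate = rough_token_count(prompt)
        if tokens_in_window + estimate > INPUT_TOKENS_PER_MIN:
            # Wait for the current one-minute window to end, then start a new one.
            time.sleep(max(0.0, 60.0 - (time.time() - window_start)))
            window_start, tokens_in_window = time.time(), 0
        tokens_in_window += estimate
        yield model.generate(prompt)

model = maritalk.MariTalk(key="insert your API key")
for answer in generate_throttled(model, ["Quanto é 25 + 27?", "Qual é a capital do Brasil?"]):
    print(answer)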

How to use

Use the MariTalk API through our Python library:

import maritalk

# Create a client with your API key
model = maritalk.MariTalk(key="insert your API key. Ex: '100088...'")

# Generate a completion for the prompt
answer = model.generate("Quanto é 25 + 27?")
print(f"Answer: {answer}")    # Should print something like "52."


MariTalk Local

In addition to using our models via API, you can also host them locally.

Your data never leaves your local machine; only the token usage is sent to our servers for billing purposes.

Sabiá-2 Small
US$0.70 per hour
30 days free
8192-token context window
Low latency; best cost-benefit
Requires a GPU with at least 24 GB of memory

Sabiá-2 Medium
US$2.00 per hour
30 days free
8192-token context window
Our highest-quality model yet
Requires a GPU with at least 80 GB of memory
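
Before starting the local server, it is worth checking that the machine actually has enough GPU memory for the model you picked. The sketch below shells out to nvidia-smi (assumed to be available with the NVIDIA driver); the 24 GB and 80 GB thresholds come from the requirements above:

import subprocess

# Minimum GPU memory, in GB, for each locally hosted model (from the list above).
requirements_gb = {"Sabiá-2 Small": 24, "Sabiá-2 Medium": 80}

def gpu_memory_gb() -> float:
    """Total memory of the first GPU reported by nvidia-smi, in GB."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.total", "--format=csv,noheader,nounits"],
        text=True,
    )
    return float(out.splitlines()[0]) / 1024  # nvidia-smi reports MiB

total = gpu_memory_gb()
for model, minimum in requirements_gb.items():
    status = "ok" if total >= minimum else f"needs at least {minimum} GB of GPU memory"
    print(f"{model}: {status}")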

How to use

Use MariTalk Local through our Python library:

import maritalk

# Create an instance of the MariTalkLocal client
client = maritalk.MariTalkLocal()

# Start the server with your license key.
# The executable will be downloaded to ~/bin/maritalk
client.start_server(license="00000-00000-00000-00000")

# Generate a response to the question
response = client.generate("Quanto é 25 + 27?")
print(response["output"])    # Should print something like "52."
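
Once the server is running, the same client instance can be reused for as many prompts as you like, keeping the model loaded instead of paying the startup cost per question (a small sketch using only the calls shown above):

# Reuse the running local server for several prompts.
# Assumes `client` is the MariTalkLocal instance started above.
questions = [
    "Quanto é 25 + 27?",
    "Qual é a capital do Brasil?",
]
for question in questions:
    response = client.generate(question)
    print(f"{question} -> {response['output']}")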
