Best Practices

Tips and recommendations for building reliable, efficient integrations with Nextcraftai.

Use System Messages

Leverage system messages to provide context and instructions to the AI model for more consistent results. System messages help guide the model's behavior and tone.

Tip: Combine multiple system messages for complex instructions.
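As a sketch of how system messages might be combined, the helper below builds a chat-style message list from several system instructions plus a user prompt. The `role`/`content` shape is an assumption based on common chat APIs, not a documented Nextcraftai schema.

```python
def build_messages(system_instructions, user_prompt):
    """Combine multiple system instructions into one message list.

    The {"role": ..., "content": ...} shape is illustrative; check the
    actual request schema for your SDK or endpoint.
    """
    messages = [{"role": "system", "content": s} for s in system_instructions]
    messages.append({"role": "user", "content": user_prompt})
    return messages


payload = build_messages(
    ["You are a concise assistant.", "Always answer in formal English."],
    "Summarize our Q3 results.",
)
```

Keeping each instruction as its own system message makes complex prompts easier to maintain than one long concatenated string.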

Implement Retry Logic

Handle transient errors (such as HTTP 429 rate limits and 5xx server errors) with exponential backoff. Retry failed requests up to 3 times with increasing delays to handle network issues and rate limits gracefully.

Tip: Always respect the Retry-After header for rate limit errors.
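A minimal retry sketch, assuming a retryable exception type that carries an optional `Retry-After` value; the `TransientError` class is a stand-in for whatever your HTTP client or SDK actually raises.

```python
import time


class TransientError(Exception):
    """Stand-in for a retryable failure (429/5xx); real SDK errors vary."""

    def __init__(self, retry_after=None):
        super().__init__("transient error")
        self.retry_after = retry_after


def backoff_delay(attempt, base=1.0, cap=30.0, retry_after=None):
    """Delay before retry `attempt` (0-based): honor Retry-After if present,
    otherwise use capped exponential backoff (1s, 2s, 4s, ...)."""
    if retry_after is not None:
        return float(retry_after)
    return min(cap, base * (2 ** attempt))


def with_retries(call, max_retries=3, sleep=time.sleep):
    """Run `call`, retrying transient failures up to `max_retries` times."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except TransientError as err:
            if attempt == max_retries:
                raise  # exhausted retries; surface the error to the caller
            sleep(backoff_delay(attempt, retry_after=err.retry_after))
```

Injecting `sleep` as a parameter keeps the backoff logic testable without real delays.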

Monitor Token Usage

Track your token consumption using the /uses endpoint to stay within plan limits and optimize costs. Set up alerts for approaching limits.

Tip: Check usage regularly to avoid unexpected billing.
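Once you have the consumed-token count (for example, from the /uses endpoint), a small helper can decide when to alert. This is an illustrative sketch; the response fields and alert threshold are assumptions, not part of the documented API.

```python
def check_usage(used_tokens, token_limit, alert_at=0.8):
    """Compare consumed tokens against the plan limit.

    Returns the usage ratio and whether it crosses the alert threshold
    (default: 80% of the plan limit).
    """
    ratio = used_tokens / token_limit
    return {"ratio": ratio, "alert": ratio >= alert_at}


status = check_usage(used_tokens=850_000, token_limit=1_000_000)
if status["alert"]:
    print(f"Warning: {status['ratio']:.0%} of token quota consumed")
```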

Cache Responses When Possible

For repeated queries with the same input, cache responses to reduce API calls and improve response times. This is especially useful for static content.

Tip: Use appropriate cache TTLs based on your use case.
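One way to apply per-entry TTLs is a small in-memory cache keyed by the request input; this is a generic sketch, not a Nextcraftai SDK feature, and a shared store such as Redis would be more appropriate across processes.

```python
import time


class TTLCache:
    """Minimal in-memory response cache with a per-entry time-to-live."""

    def __init__(self, ttl_seconds=300, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock  # injectable for testing
        self._store = {}

    def get(self, key):
        """Return the cached value, or None if missing or expired."""
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if self.clock() >= expires_at:
            del self._store[key]  # evict stale entry
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, self.clock() + self.ttl)
```

Before issuing an API call, check `cache.get(prompt)`; on a miss, make the call and store the result with `cache.set(prompt, response)`.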

Handle Errors Gracefully

Always implement proper error handling for network failures, rate limits, and API errors. Provide meaningful error messages to users.

Tip: Log errors for debugging but don't expose sensitive details to users.
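The pattern below separates what gets logged from what users see: full details go to the log, while callers receive a generic, status-specific message. The `APIError` class is a hypothetical stand-in for your SDK's real exception type.

```python
import logging

logger = logging.getLogger("api-client")


class APIError(Exception):
    """Stand-in for an SDK error carrying an HTTP status; real errors vary."""

    def __init__(self, status):
        super().__init__(f"API error {status}")
        self.status = status


# User-facing text per status; deliberately free of internal detail.
USER_MESSAGES = {
    429: "The service is busy. Please try again in a moment.",
    500: "Something went wrong on our side. Please try again.",
}


def safe_call(call):
    """Run an API call; log failures in full, return a user-safe result."""
    try:
        return {"ok": True, "data": call()}
    except APIError as err:
        logger.error("API call failed with status %s", err.status)
        message = USER_MESSAGES.get(err.status, "An unexpected error occurred.")
        return {"ok": False, "message": message}
```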

Use Appropriate Models

Choose the right model for your use case. Use faster models for simple tasks and more powerful models only when needed to optimize costs.

Tip: Start with gemini-2.5-flash for most use cases.
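A simple router can make this choice explicit in code. Only gemini-2.5-flash appears above; the "pro" model name here is purely illustrative and should be replaced with whatever heavier model your plan offers.

```python
def pick_model(task_complexity):
    """Route simple tasks to the fast model, complex tasks to a larger one.

    'gemini-2.5-pro' is a hypothetical placeholder for a more powerful model.
    """
    if task_complexity == "simple":
        return "gemini-2.5-flash"
    return "gemini-2.5-pro"
```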