
Do you have any tips on AI Best Practices?

How to get the most out of CisionOne’s AI features

Written by Jonathan Gould
Updated over 2 months ago

Our AI assistant is designed to guide you through common workflows using smart, structured templates. Whether you're building a Boolean Query, writing a social post, or refining your Media Outreach, this guide will show you how to make the most of each interaction and avoid unexpected results.


1. Choose Your Prompt

When you click on a prompt (e.g. "Create a brand-focused Boolean Query" or "Write my Social Post"), the assistant will respond with a question to help gather the right details.

Example:

Prompt clicked: "Write my Social Post"

Assistant: “Sure! What’s the post about, and which platform are you targeting?”


2. Respond Clearly to the Assistant’s First Message

The assistant is waiting on your input, and its results will only be as good as the information you give it. Be specific when replying so it can generate the most relevant result.

✅ Good Response:

“It’s a post for Instagram about our new ‘Hailey Bieber x Rhode’ skincare drop. Target audience is Gen Z. This skincare line is made with all-natural ingredients and is formulated to be hydrating and to target all signs of aging.”

❌ Bad Response:

“Something for Hailey Bieber.”


3. Build Step-by-Step

You don’t need to get everything perfect up front. The assistant is built to be conversational and collaborative. After the first suggestion, you can:

  • Ask for rewrites

  • Request tone changes

  • Add or remove details

  • Break things into sections

Tip: Use follow-ups like “Make it shorter,” “Add a call to action,” or “Group by campaign.”


4. Stay Within Task Scope

Stay within the supported task areas for the best results. For example, if you start a conversation to create a Boolean Query, keep that conversation focused on the Boolean task. If you want to switch to writing a Social Post, restart the assistant and use the Social Post creation template button.

The supported Tasks are:

  • Monitoring: Mention Stream and Boolean support

  • Social: Social post creation, variations and timing

  • Outreach: Outreach creation, adjustments and timing


5. Avoid Unclear or Off-Template Prompts

The assistant is designed around task templates. It won’t perform well with vague or out-of-scope messages.

Avoid:

  • “Write something for Dior”

  • “I need help” without context

  • Asking for live news or links


Best Practices at a Glance

Task Type | After Clicking Template… | Good User Response Example
Boolean | “What brand or topic should I focus on?” | “Dior, with terms around perfume, spokespeople like Jisoo and Depp.”
Social | “What’s the post about and which platform?” | “A TikTok post announcing our Billie Eilish campaign for eco fashion.”
Outreach | “What’s the pitch topic and goal?” | “A professional media pitch about Qantas’ new sustainability initiative.”
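
To give a sense of the end result: based on the Boolean response above, the assistant might return a query along these lines. This is an illustrative sketch only; the exact terms and operators will depend on the details you provide and on your monitoring setup.

  ("Dior" OR "Christian Dior") AND (perfume OR fragrance) AND (Jisoo OR "Johnny Depp")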


Disclaimer

Please note that the responses generated by the AI model, powered by large language models (LLMs), are based on probabilistic predictions and publicly available data. While we strive for accuracy, the AI may produce incomplete, outdated, incorrect, or potentially unsuitable results in certain contexts.

Any reliance on its outputs is at the user’s discretion, and we recommend verifying critical information. We do not assume liability for any results provided by the AI or any decisions, actions, or outcomes resulting from AI-generated responses.


Reminders

  • The assistant does not pull live data or news

  • It won’t reference help articles

  • Each task should follow its own structured flow; stick to the conversation thread started by the template button

FAQ

  • What AI technology powers the CisionOne Assistant?

    The feature is powered by Google's Gemini 1.5 Pro model and has been specifically optimised for CisionOne Tasks.

  • How does the AI handle different languages?

    The assistant can understand and respond in multiple languages. However, it may occasionally default to English instead of the intended language.

  • Are my chats and feedback data stored?

    Yes. This includes your user ID, input, output, feedback response (thumbs up/down with reason) and search date. We use this data solely to observe and optimise the model's performance. This data is archived after a certain period.

  • What happens if there is an error?

    If an error occurs during the AI response process, you'll see an error message indicating that something went wrong. Simply try your search again, and if the error persists, contact Technical Support and submit a ticket via Jira for assistance.

  • Are the AI results tailored to individual clients or trained on client-specific data?

    No, the AI model is not client specific and does not provide results tailored to individual clients. A single model is used for all clients, and any feedback provided helps improve the overall model for everyone. This ensures consistent, unbiased performance across all clients.

  • How does the feedback get processed to update the AI model?

    All feedback is sent to the product team for review. The team uses this input to assess performance and make any necessary adjustments to improve the model over time.

  • How often is the AI model updated or adjusted?

    The AI model is regularly reviewed based on feedback submitted in the platform to ensure results remain accurate and relevant. There is no set cadence for these reviews, but updates will be made as necessary to maintain performance.

  • How does the AI model handle sensitive or inappropriate content?

    Google’s Gemini model on Vertex AI uses four configurable safety categories — Hate Speech, Harassment, Sexually Explicit Content, and Dangerous Content — to evaluate whether an input is classified as Safe or Unsafe. We have set the model to the highest safety levels for most categories, meaning that most content falling into these categories will be flagged as sensitive and show the appropriate error message. Additionally, we’ve implemented safeguards within the prompt to prevent the AI from generating responses that include sensitive content.
