
AI Questionnaire Assistance (AIQA) - FAQs

Find commonly asked questions regarding AIQA

Written by Ashley Hyman
Updated over 9 months ago

I can’t see the questionnaires table; I see an onboarding screen instead.

  • You will see the onboarding screen when your datastore is not ready to be used by AIQA. A datastore is a special database that AIQA uses to answer questions.

  • To get past this screen you should:

    1. Make sure that you have some content in your Trust Library (documents, KB entries, TC Card Item descriptions, or TC Updates).

    2. Make a small edit to something like a KB entry to kick off the datastore update process.

    3. Wait while the datastore is created if you have just added or edited something in your Trust Library.

    4. If the onboarding screen persists, contact your SafeBase Customer Success representative or reach out to support@safebase.io for help.

I can’t click “Start” on any of my questionnaires. The button is disabled.

This is most likely because your organization has not enabled generative AI features. Go to your organization settings, scroll down to the generative AI section, and make sure the toggle is on.

My questionnaire seems to be stuck in a processing state. What should I do?

  • Questionnaires can take different amounts of time to process based on two factors:

    1. The size of the questionnaire you are running

    2. The number of other AIQA customers who are running questionnaires at the same time

  • An approximate rule of thumb under normal operating conditions:

    • Small questionnaire (< 100 questions) - finishes processing in 3-5 minutes

    • Medium questionnaire (100 - 300 questions) - finishes processing in 5-10 minutes

    • Large questionnaire (300+ questions) - finishes processing in 10+ minutes

  • If it has been more than 30 minutes and the questionnaire has not finished processing, reach out to SafeBase to investigate

AIQA didn’t follow the answer options (e.g. Yes, No, N/A) and returned free-form text instead

Whether AIQA responses can adhere to specific answer options depends on the type of questionnaire:

  • Sheet-style files - answer options must be encoded as a dropdown in the original Excel file, and that dropdown must be in the same column that you select as the “Answer” column when you upload the file. Otherwise, AIQA will treat the questions as free-text questions (see the sketch after this list).

  • Text-style files - answer options are not supported in text-style files like .docx and .pdf

  • Portals - If a question is formatted as a single- or multi-select dropdown in a portal, it should be ingested as such in AIQA. This is only true if the Chrome Extension is used to import the questionnaire. Some portals also allow you to export the questionnaire as an Excel file; these exports often do not retain the question type, so the answer options will not be recognized if the exported file is uploaded into AIQA.
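For sheet-style files, the key point is that the dropdown must exist in the workbook itself before you upload it. As a rough illustration only (SafeBase does not require any particular tooling), here is a minimal Python sketch using openpyxl that adds a Yes / No / N/A dropdown to a hypothetical “Answer” column; the file name and column letter are placeholders for your own layout.

```python
# Minimal sketch: adding a "Yes / No / N/A" dropdown to the column you will
# select as the "Answer" column before uploading a sheet-style questionnaire.
# Assumes openpyxl is installed; file name and column letter are examples only.
from openpyxl import load_workbook
from openpyxl.worksheet.datavalidation import DataValidation

wb = load_workbook("questionnaire.xlsx")
ws = wb.active

# A list-type data validation is what Excel renders as a dropdown.
dv = DataValidation(type="list", formula1='"Yes,No,N/A"', allow_blank=True)
ws.add_data_validation(dv)

# Apply the dropdown to the answer column (column B in this example),
# from row 2 down to the last row of the sheet.
dv.add(f"B2:B{ws.max_row}")

wb.save("questionnaire_with_dropdowns.xlsx")
```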

I uploaded an RFP, but AIQA didn’t provide many answers to the questions

  • AIQA is optimized to answer security-related questions and we cannot guarantee that it will perform well on general RFP questions.

  • Some users include non-security information in their Trust Center and Knowledge Base, which gives AIQA a chance at answering RFP questions. Even when this content has been uploaded to SafeBase, results for general RFP questions are not always reliable.

I see one of the answer sources is an “Analyzed Fact”, what is this?

An analyzed fact is basic company information that SafeBase creates to answer certain questions reliably. For example, we create analyzed facts for things like the company name and the list of certifications that the company holds. Analyzed facts are not documents that you upload yourself; they are generated automatically.

Are there confidence levels with AI generated answers?

  • There is not a confidence score associated with AI generated answers.

  • To ensure that AI generated answers are reliable, SafeBase does two things:

    1. We have a robust post-processing pipeline that takes the output from an LLM and shapes it into a suitable response for a security questionnaire. These steps include making sure that the answer is grounded in the facts provided, that the answer obeys the answer options for single-select and multi-select questions, and that the generated answer is not overly verbose.

    2. We have our model explicitly return no answer when it cannot confidently find relevant supporting materials. LLM-based pipelines tend to force a response based on loosely related information; AIQA has tight guardrails in place to ensure this does not happen.

Why do I have to select a single product when running a questionnaire? (Relevant for multi-product orgs)

AIQA requires that a single product be selected before running because organizations often have different answers to the same question across their products. If we combined all of the source material across products into one dataset, the model would find conflicting material supporting different answers to the same question, which would confuse answer generation.

I am getting an error in the Chrome Extension, what should I do?

  • Try importing the questionnaire using the Chrome Extension again; sometimes these errors are temporary. If you continue to experience issues, reach out to the SafeBase team to troubleshoot.

  • Providing the following information is very helpful for debugging:

    • The portal name

    • The portal URL

    • Credentials to access the portal (if you are able to provide those to us)

    • A description of what you were trying to do when the error occurred (e.g. “I was trying to import a questionnaire from Coupa into SafeBase” or “I was trying to export the answers to a questionnaire from SafeBase into ProcessUnity”)
