Welcome to our FAQ page focused on answering the technical, security, capability, and other questions we’re often asked about our AI technology.
We’ve integrated generative AI and other smart technologies into our solutions, accelerating and improving case assessment, analysis, strategy, and preparation workflows. Throughout the case lifecycle, our connected, controlled AI keeps everyone on a legal team engaged in the highest-value work so they can deliver superior case outcomes more efficiently and effectively.
With the evolving nature of AI technologies, we may need to adjust the answers to the following questions.
Please check back here for the most up-to-date information.
We make generative AI (GenAI) available for clients to use within the product via access to large language models (LLMs). This is mostly used to process documents with predefined prompts.
Yes, we use AWS Bedrock (Amazon Bedrock).
Outputs are owned by the customer. However, outputs provided to the customer and its users may be similar or identical to outputs independently provided to our other customers and their users; outputs requested by and generated for our other customers are not owned by the customer.
Yes. Any personal data or PII submitted to the AI features is considered “Customer Data” as defined in the Data Protection Addendum within the Opus 2 Terms and Conditions, and the protections and restrictions on use and transfer in the DPA apply to that personal data and PII.
To help maintain system performance and availability, usage limitations apply. These are specified in our Terms and Conditions; please request a copy for specific limits and considerations.
The customer and its users solely decide what data is submitted to the AI technology. No data is processed by the AI technologies unless the user specifically submits it.
We can switch to newer models with simple configuration changes. We are continually developing the product to make the most out of enhanced capabilities that are added to models.
Yes, we use AWS Bedrock, hosted within specific regional clusters. AWS Bedrock provides access to numerous large language models, and we have engineered the product to be model-agnostic so we can use different models as these evolve. AWS Bedrock is used with base models; we currently use Anthropic Claude 3 Haiku (version 2024-03-07).
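For illustration, here is a minimal sketch of what a predefined-prompt call to Claude 3 Haiku via AWS Bedrock can look like, using the standard boto3 bedrock-runtime client. The region, prompt wording, and token limit are assumptions for the sketch, not our production configuration:

```python
import json

import boto3

# Bedrock runtime client; the region here is illustrative, not our actual cluster.
client = boto3.client("bedrock-runtime", region_name="eu-west-1")

def summarise(document_text: str) -> str:
    """Run a predefined summarisation prompt against Claude 3 Haiku on Bedrock."""
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,  # illustrative limit
        "messages": [{
            "role": "user",
            "content": f"Summarise the following document:\n\n{document_text}",
        }],
    })
    response = client.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        body=body,
    )
    result = json.loads(response["body"].read())
    return result["content"][0]["text"]
```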
We are LLM-agnostic, meaning we can swap in different LLMs as these evolve over time, with careful consideration and thorough testing. We can also use one model for one type of analysis and another model for another.
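As a sketch of what model-agnostic can mean in practice, a configuration table can map each analysis type to a Bedrock model ID, so swapping models is a configuration change rather than a code change. The task names and the alternative model ID below are hypothetical examples:

```python
# Hypothetical mapping of analysis type to Bedrock model ID.
MODEL_CONFIG = {
    "summarisation": "anthropic.claude-3-haiku-20240307-v1:0",
    "entity_extraction": "anthropic.claude-3-haiku-20240307-v1:0",
    # After careful testing, a task could be pointed at a different model, e.g.:
    # "summarisation": "anthropic.claude-3-5-sonnet-20240620-v1:0",
}

def model_for(task: str) -> str:
    """Resolve the model ID for a given analysis type from configuration."""
    return MODEL_CONFIG[task]
```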
We do not train the models; AWS Bedrock models are pretrained. As better models become available, we have the ability to switch to them through configuration. In the future, we plan to augment the prompts with contextual data drawn from each case to provide personalised responses. This removes the requirement to perform prompt engineering at the client level.
No, firm data is not used to train the models.
LLMs are known to contain inherent biases. Some of these concerns do not apply to our application of AI because we use LLMs only for their language skills, operating on content users provide, rather than for their general knowledge. We also provide the context behind why the AI has considered something important, so that lawyers always remain in control, can determine whether it is relevant, and can spot any issues.
We run the AI model directly from our own AWS account. Prompts sent to Bedrock are not stored by AWS. Only content the user has specifically selected for analysis is sent to the AWS Bedrock service with each request. Content is not shared between projects (cases) for learning or any other purpose.
Data sent to the LLM is not stored and is not used for training models. For semantic queries, each Canvas in the project (case) has its own vector database and data store of extracted content. Queries against a particular Canvas can only run against the databases associated with that Canvas—for example, documents in that project. The AI system processes the data the user chooses to send to it. No data is processed without the user specifically selecting it.
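To illustrate the isolation described above, here is a minimal, hypothetical sketch of per-Canvas vector stores: each Canvas gets its own index, and a semantic query only ever reads the index for the Canvas it was issued against. The data structures and function names are ours, for illustration only:

```python
import numpy as np

# One vector store per Canvas: (text chunk, embedding) pairs keyed by Canvas ID.
canvas_stores: dict[str, list[tuple[str, np.ndarray]]] = {}

def add_chunk(canvas_id: str, text: str, embedding: np.ndarray) -> None:
    canvas_stores.setdefault(canvas_id, []).append((text, embedding))

def semantic_query(canvas_id: str, query_emb: np.ndarray, top_k: int = 5):
    """Return the top-k most similar chunks, searching only this Canvas's index."""
    store = canvas_stores.get(canvas_id, [])
    scored = sorted(
        ((float(np.dot(emb, query_emb)
                / (np.linalg.norm(emb) * np.linalg.norm(query_emb))), text)
         for text, emb in store),
        reverse=True,
    )
    return scored[:top_k]
```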
We run the AI model via AWS Bedrock directly from our own AWS account. Data is sent to AWS Bedrock for each query, and the content and response are not stored within AWS Bedrock.
This is covered by the AWS Bedrock security policy.
Results of AI operations are treated the same way as other customer and user content. They are stored within the customer’s Opus 2 workspace until the customer chooses to delete them.
AWS ensures data is encrypted at rest, and connections to Bedrock are made over TLS (Transport Layer Security, the successor to SSL) so that data is also encrypted in transit.
We conduct regular security audits and assessments every six months, including secure code reviews, security testing, and configuration audits, with a CREST-accredited specialist penetration testing company.
Yes, we extract people (characters), facts, events, organizations, and legal topics (concepts) from documents. These extractions can be added into work product such as chronologies, character lists, organization lists, and more. In addition, the analysis can link these concepts together, for example linking specific people to events and linking those events to their sources within documents. Extraction rules are configurable.
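As a sketch of how configurable extraction rules can feed a predefined prompt, the rule names and output shape below are illustrative assumptions, not our actual schema:

```python
# Illustrative extraction configuration; names are hypothetical.
EXTRACTION_RULES = {
    "entities": ["people", "organizations", "events", "legal_topics"],
    "link_people_to_events": True,
}

def build_extraction_prompt(document_text: str) -> str:
    """Compose a predefined extraction prompt from the configured rules."""
    prompt = (
        "Return JSON with the keys "
        + ", ".join(EXTRACTION_RULES["entities"])
        + ". For every item, quote the source passage so it can be verified."
    )
    if EXTRACTION_RULES["link_people_to_events"]:
        prompt += " Link each person and organization to the events they appear in."
    return prompt + "\n\nDocument:\n" + document_text
```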
Users are in complete control of the use of generated results, with any decision to save or use the results actioned by users. All results provide sources directly from the document so that users can validate the response is accurate and relevant.
How we integrate AI technologies into our software is key. Because we understand lawyers’ workflows and how they interact with and use documents and data, we adhere to five AI principles that guide our development process, ensuring that every enhancement is beneficial and empowers existing working practices.
LLM queries are run against external systems. We adjust our usage within AWS Bedrock to scale up and down as required.
The analysis process on a Canvas is run in the background and pre-generates much of the content, such as summaries of different lengths for each document. These are then immediately available to the viewer of the content.
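A minimal sketch of that background pre-generation step, assuming a one-argument summarise() helper like the Bedrock call sketched earlier; the length names and word counts are illustrative:

```python
# Illustrative target lengths for pre-generated summaries.
SUMMARY_LENGTHS = {"short": 50, "medium": 150, "long": 400}  # target word counts

def pregenerate_summaries(document_text: str, summarise) -> dict[str, str]:
    """Run in the background when a document is added to a Canvas, so each
    summary is already stored and immediately available to the viewer."""
    return {
        name: summarise(f"Summarise in at most {words} words:\n\n{document_text}")
        for name, words in SUMMARY_LENGTHS.items()
    }
```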
We use a fixed model; AWS is responsible for ensuring access to Bedrock. On our side, we apply best practices and continually enhance our prompt engineering to ensure we get the best results possible.
We do not support fine-tuning on top of LLMs. However, our prompts are designed to enable configuration outside of the prompt itself. For example, a list of legal concepts applicable to the project (case) can be maintained outside of the model and integrated into the prompt to provide more accurate results in the context of the case.
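For example, here is a sketch of case-level configuration being merged into a prompt at query time; the template wording and function names are hypothetical:

```python
# Hypothetical template: the case-specific concept list lives in configuration,
# outside the prompt itself, and is merged in when the prompt is built.
PROMPT_TEMPLATE = (
    "You are analysing a legal document. Pay particular attention to these "
    "case-specific legal concepts: {concepts}.\n\n"
    "Document:\n{document}\n\n"
    "Identify passages relevant to each concept, quoting the source text."
)

def build_prompt(document: str, case_concepts: list[str]) -> str:
    """Merge case-level configuration into the predefined prompt."""
    return PROMPT_TEMPLATE.format(concepts=", ".join(case_concepts),
                                  document=document)
```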
We do not train our own models. As better models become available, we can switch to them through configuration. As described above, we can augment the prompts with contextual data to provide personalised responses to each case, based on the case itself. This removes the requirement to perform prompt engineering at the client level.
Our AI capabilities include analyzing documents, generating summaries of different lengths, extracting key events, people, and organizations, and analyzing transcripts in various ways to pull together, for example, key points, narrative summaries, inconsistent testimony, and sentiment analysis. Opus 2 also has a query capability that enables lawyers to ask specific questions about documents.
Not only does our AI enable users to perform analysis on documents, it also makes suggestions and recommendations. When doing so, we provide access to the sources the AI used, so that there is full transparency and visibility into everything the AI is doing and its output can be checked for accuracy, ensuring the AI is not hallucinating.
Our integration of AI within Opus 2 solutions is not about replacing what lawyers or paralegals do but enhancing their existing workflows and automating certain labor-intensive tasks to free up time to work on the most valuable aspects of a case.
No, our AI features provide an output, but the user determines its applicability and appropriateness for the user’s intended purpose. Users are in complete control of generated results, with any decision to save or use the results actioned by users. All results provide sources directly from the document so that users can validate that the response is accurate and relevant.
Only if the user chooses to save them in the project.
The AI system processes the data that the user chooses to send to it. No data is processed without the user specifically selecting it. We are using LLMs only for their language skills operating on content we provide, rather than their general knowledge. We also provide the context behind why the AI has considered something important so that lawyers are always kept in control and can spot any issues.
Opus 2 uses artificial intelligence to enhance the delivery of our own legal services offerings. This enables Opus 2 to deliver services more efficiently and effectively, and it improves the service levels we can provide to clients.
Permissions are set specifically for each case project and its content. This means that access can be customised depending on the project or specific content within the project. This is managed through role-based access, where users are assigned roles, each with distinct capabilities. This structure ensures that control over editing and managing content is well-defined, offering both flexibility and security in managing access.
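In outline, role-based access of this kind reduces to a per-project lookup like the hypothetical sketch below; the role and permission names are illustrative, not our actual scheme:

```python
# Illustrative roles and the actions they permit.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "contributor": {"read", "edit"},
    "admin": {"read", "edit", "manage"},
}

def is_allowed(user_roles: dict[str, str], project_id: str, action: str) -> bool:
    """Check a user's role for this specific project before allowing an action."""
    role = user_roles.get(project_id)  # roles are assigned per project
    return action in ROLE_PERMISSIONS.get(role, set())
```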
Yes, this is possible and can be used in either pre- or post-processing. For example, we are looking to enable users to input designation topics as part of a prompt to pull out designations in transcripts around a specific topic. We can also identify legal topics within documents, and we are looking to implement the ability to take a predefined list of legal topics for a specific case and use it as a customized input, for example to find documents, and sources within documents, that relate to a legal topic important to the case.
AI hallucinations—or when a large language model (LLM) perceives patterns or objects that are nonexistent, creating nonsensical or inaccurate outputs—are an issue we’re solving using a white-box AI concept. White box describes a system where the algorithms, logic, and decision-making process of the “box” are transparent and comprehensible. Imagine it as a transparent container with visible internal components.
Because of this transparency, users can understand how the AI makes decisions and comes to conclusions, which gives them insight into its workings. On the other hand, black-box AI conceals its decision-making process, akin to a locked box; although it generates predictions or results, it withholds the methodology behind its calculation.
Because a white-box approach enables developers, users, or regulators to examine, validate, and even alter the AI’s behavior for accuracy, fairness, and ethical considerations, it fosters trust and accountability. It allows for a better understanding of and control over the AI’s operation, much like having a user manual.