eDiscovery Tips & Tricks


Archiving Datasets

Did you know that an archiving feature was added in Brainspace 6.2? This handy feature allows you to isolate and archive datasets, freeing up space and ultimately reducing your active document count.

Considering that Brainspace licenses are allocated and priced according to an active document threshold, archiving can prevent the purchase of additional, unnecessary licenses while eliminating bottlenecks in your workflow. Furthermore, if you choose to archive a dataset, you can re-import it later and pick up where you left off with no additional cost, bottleneck, or hassle!

Relativity Plus Connector

The Relativity Plus Connector was also added in Brainspace 6.2. To use it, your platform must run Relativity 9.7 or higher.

This connector leverages the Relativity API, which not only expedites Brainspace builds but also brings notable security enhancements:

  • OAuth authentication to Relativity
  • Relativity credentials are no longer stored in Brainspace
  • Direct SQL connectivity is no longer required

Simultaneous Build Servers

When you need to build and review numerous Brainspace datasets, consider adding a supplemental build server to your Brainspace environment. The added resources allow your team to run two builds simultaneously, increasing the pace of review and analysis.

Multifactor Authentication

Brainspace supports SAML 2.0, which is great news for your InfoSec team. You can seamlessly link your favorite SAML provider, such as Okta, Duo, Ping, or even Azure, and further secure Brainspace by applying your provider's security policies to application access. For example, when Brainspace is tied to Okta, users are prompted for Multifactor Authentication via a push notification to a secure device before gaining access. More security, seamless integration!



Nuix Infrastructure & Configuration

In-house IT teams and end users often lack a fundamental knowledge of Nuix infrastructure design best practices, which inevitably leads to end users complaining about Nuix underperformance. Keep the following in mind:

Properly configure the requisite drives to maximize throughput (e.g., local vs. network storage, RAID configurations).

Provision the appropriate amount of memory for the filetypes at hand. This is especially crucial for accommodating containerized items such as PST, NSF, and E01 files.

Properly assign the number of CPU cores available relative to how many workers will be used for a processing, OCR, and/or export job.

End users must understand the relationship between memory configuration and Nuix performance in order to maximize throughput and keep the application from crashing.

Optimize item counters in order to minimize the load on the user interface.

Underutilized Nuix Features

Many end users do not understand the full depth of Nuix's capabilities and workflows. Moreover, users fail to leverage the application's full suite of features, allowing supplemental, sometimes less efficient eDiscovery applications to bear larger loads than necessary.

Search and tag. The ability to tag items from a list of search terms can be used in place of a Search Terms Report (STR) in Relativity.

Leverage item sets. Item sets are used to create a group of documents that have been deduplicated at the family level as part of a workflow to promote items for review.
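To make the idea concrete, here is a minimal Python sketch of family-level deduplication. The flat dicts and field names are illustrative stand-ins, not the Nuix data model: a family is identified by its top-level parent, duplicate families share that parent's content digest, and only the first family seen for each digest is promoted.

```python
def dedupe_at_family_level(items):
    """Keep one whole family per unique parent digest.

    items: dicts with 'family_id' (the top-level parent item) and
    'parent_digest' (a hash of that parent's content). All members of
    a family share both values, so duplicate families are dropped
    intact rather than item by item.
    """
    kept = {}  # parent_digest -> family_id of the first family seen
    promoted = []
    for item in items:
        keeper = kept.setdefault(item["parent_digest"], item["family_id"])
        if item["family_id"] == keeper:
            promoted.append(item)
    return promoted
```

Because duplicates are decided at the parent level, attachments are never orphaned: a duplicate family is removed together, preserving family integrity for review.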

Employ the Nuix OCR feature if your system runs version 8.6 or later.

Conduct email threading. Performing thread analysis in Nuix frees up resources in Relativity.

Use scripting and automation. Basic tasks can and should be automated via scripts to minimize the need to monitor processing and other parts of the workflow. This also has the added benefits of repeatability and greater defensibility.
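As a sketch of what such automation can look like, the search-and-tag workflow above can be driven by a term list. The data model below is a plain in-memory stand-in, not the actual Nuix scripting API, and the term-to-tag mapping is hypothetical:

```python
# Hypothetical term-to-tag mapping; in practice this would come from
# the case's agreed search term list.
TERM_TAGS = {
    "merger": "Responsive/Merger",
    "invoice": "Responsive/Finance",
}

def search_and_tag(documents, term_tags):
    """Tag every document whose text contains a search term.

    documents: dicts with a 'text' field; matching tags accumulate in
    a 'tags' set. This mirrors the search-and-tag pattern only, using
    a stand-in data model rather than the Nuix API.
    """
    for doc in documents:
        text = doc["text"].lower()
        for term, tag in term_tags.items():
            if term in text:
                doc.setdefault("tags", set()).add(tag)
    return documents
```

Running such a script unattended yields the same tags on every run, which is precisely the repeatability and defensibility benefit noted above.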



Non-optimized Searches

When leveraging Relativity, it is commonplace, and often critical, for end users to create multi-layered, nested searches. However, problems arise when end users fail to set the appropriate controls, which yields delayed outputs and consumes valuable resources on the SQL Server, undermining application performance.

The proverbial 'smoking gun' of lackluster Relativity performance is the "is like" or "is null" operator applied to extracted text fields, which requires a full table scan and ultimately creates a bottleneck, as all other corresponding searches must wait for that set to return.
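The full-table-scan effect is easy to demonstrate. Relativity runs on SQL Server, but SQLite (bundled with Python) illustrates the same principle: a leading-wildcard "is like" predicate on a text column cannot use an index, while an exact predicate can. The table and index names here are invented for the demo:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Document (DocID INTEGER PRIMARY KEY, ExtractedText TEXT)")
con.execute("CREATE INDEX ix_text ON Document(ExtractedText)")

def plan(sql):
    # The 'detail' column of EXPLAIN QUERY PLAN reports SCAN for a
    # full table scan and SEARCH for an index seek.
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

# A wildcard LIKE forces the engine to scan every row:
print(plan("SELECT DocID FROM Document WHERE ExtractedText LIKE '%contract%'"))
# An exact predicate can seek the index instead:
print(plan("SELECT DocID FROM Document WHERE ExtractedText = 'contract'"))
```

On SQL Server the mechanics differ (and full-text indexing is the usual remedy for text search), but the underlying cost model is the same: wildcard matches over large text fields force the engine to touch every row.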

Query Hints

Query Hints, especially when improperly applied to multi-layered, nested searches, present additional performance issues for the end user. Query Hints can interfere with SQL Server's cardinality estimates, causing additional delays as the engine must wait to secure the resources narrowly defined by the hint's plan. By default, SQL Server should be allowed to choose the proper plan for query optimization.

To navigate potential performance issues related to Query Hints, end users should first consult Relativity's Support Team or an experienced Database Administrator (DBA).

dtSearch Index Management

As cases grow in size and complexity, the role of index management becomes increasingly important. First and foremost, end users must temper expectations when applying dtSearch to all documents, as building an index across an entire population often takes multiple days.

Moreover, it is imperative that end users and organizations employ a rational, documented search strategy to yield timelier output for cases with high data volumes. One strategy to expedite the process in an orderly manner is to create multiple dtSearch indexes based on extracted/OCR text size and/or the required search fields.
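One way to sketch that splitting strategy in Python (the byte cap and the greedy packing are illustrative assumptions, not a dtSearch requirement) is to partition documents into index batches capped by total extracted-text size:

```python
def partition_by_text_size(docs, max_batch_bytes):
    """Greedily pack documents into index batches by extracted-text size.

    docs: list of (doc_id, text_size_bytes) pairs. Documents are sorted
    largest-first so the biggest items anchor their own batches; each
    batch is closed once adding the next document would exceed the cap.
    Returns a list of batches, each a list of doc_ids.
    """
    batches, current, current_size = [], [], 0
    for doc_id, size in sorted(docs, key=lambda d: d[1], reverse=True):
        if current and current_size + size > max_batch_bytes:
            batches.append(current)
            current, current_size = [], 0
        current.append(doc_id)
        current_size += size
    if current:
        batches.append(current)
    return batches
```

Each resulting batch becomes its own dtSearch index, so the smaller indexes finish (and become searchable) while the heavier ones are still building.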


Talk to the Experts!

If you found this information helpful and would like to tap into George Jon’s wealth of knowledge and experience, please contact us for a consultation. Our Subject Matter Experts (SMEs) are standing by, and we welcome the opportunity to optimize your eDiscovery environment capabilities and performance.


Since the birth of the eDiscovery market over 15 years ago, George Jon's sole mission has been to architect, deploy, and manage eDiscovery and forensic solutions, providing the best end-user experience, agnostic of application, for our portfolio of blue-chip clients (MNCs, Am Law 200 law firms, service providers, and 'Big Four' advisory firms) worldwide.

Through eDiscovery solution deployments from Toronto to Tokyo (not hyperbole), George Jon's expert team of infrastructure and application engineers has seen it all with regard to application performance issues. In our experience, there are usually two main culprits that hinder application performance in an eDiscovery environment:

  • Human error
  • Knowledge gaps in appropriately provisioning resources for applications and in basic adherence to, or understanding of, application workflows