Minimum clinical utility standards for wearable seizure detectors: A simulation study

Abstract

Objective

Epilepsy management relies on self-reported seizure diaries, despite well-documented underreporting of seizures. Wearable and implantable seizure detection devices are becoming more widely available, yet there are no clear guidelines on what level of accuracy is sufficient for clinical use. This study aimed to simulate common clinical use cases and identify the level of accuracy required for each.

Methods

A realistic seizure simulator (CHOCOLATES) was used to produce a ground truth seizure record, which was then sampled to generate signals from simulated seizure detectors of varying accuracy. Five use cases were evaluated: (1) randomized clinical trials (RCTs), (2) medication adjustment in clinic, (3) injury prevention, (4) sudden unexpected death in epilepsy (SUDEP) prevention, and (5) treatment of seizure clusters. We considered sensitivity (0%–100%), false alarm rate (FAR; 0–2/day), and device type (external wearable vs. implant) in each scenario.
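
As a minimal illustration of the sampling step, the sketch below models how a simulated detector with a given sensitivity and FAR could be derived from ground-truth seizure times: each true seizure is reported with probability equal to the sensitivity, and false alarms arrive as a Poisson process at the chosen FAR. This is not the study's actual code and does not reproduce the CHOCOLATES interface; the function name, parameters, and example values are illustrative assumptions only.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    def simulate_detector(seizure_times, n_days, sensitivity, far_per_day):
        """Sample a simulated detector's alarm times from ground-truth seizure times.

        Hypothetical sketch: sensitivity in [0, 1], far_per_day in [0, 2],
        all times expressed in days from the start of the record.
        """
        seizure_times = np.asarray(seizure_times, dtype=float)
        # Each true seizure is flagged independently with probability `sensitivity`.
        detected = seizure_times[rng.random(seizure_times.size) < sensitivity]
        # False alarms arrive as a homogeneous Poisson process at `far_per_day` per day.
        n_false = rng.poisson(far_per_day * n_days)
        false_alarms = rng.uniform(0.0, n_days, size=n_false)
        return np.sort(np.concatenate([detected, false_alarms]))

    # Example: a 30-day record with three seizures, 90% sensitivity, 0.5 false alarms/day.
    alarms = simulate_detector([3.2, 10.5, 11.0], n_days=30, sensitivity=0.9, far_per_day=0.5)

Sweeping sensitivity and far_per_day over a grid in this fashion yields one simulated alarm stream per (sensitivity, FAR) pair, which can then be fed into each of the five use-case evaluations.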

Results

The RCT case was efficient across a wide range of wearable parameters, although implantable devices were preferred. Lower-accuracy wearables subtly altered the distribution of patients enrolled in RCTs; higher sensitivity and lower FAR values were therefore preferred. In the clinic case, a wide range of sensitivities, FARs, and device types yielded similar results. Injury prevention, SUDEP prevention, and seizure cluster treatment each required high sensitivity but were minimally influenced by FAR.

Significance

The choice of use case is paramount in determining acceptable accuracy levels for a wearable seizure detection device. We offer simulation results that can guide the determination and verification of utility for specific use cases and specific wearable parameters.
