diglet wrote: ↑Sun Jan 12, 2025 6:02 pm
Yes. I used -5 to -6 with my stereostim rig, and -1.3 with FOC-Stim. I believe this is mostly caused by the 3-transformer configuration.
And also with FOC-stim, the 3 outputs are calibrated to similar values, right? I've been sick for a week so I still haven't had a chance to test it, but I hope I'll get some alone time next week so I can :)
My motivation for trying to perform a safety analysis of the device is that if the hardware at its max capacity could deliver a signal comparable to the strength of a taser, then it has the potential to be dangerous. Luckily I've never been tasered myself, but there are clips on youtube where the effects can be studied. It seems to me that the victim gets paralyzed with pain, which renders them unable to just pull out the taser electrodes - and these people aren't even being hit in the genitals :D So it would seem to me that if a FOC-stim device for some reason went to full signal strength, a solo stimmer might be able neither to turn it off nor to pull off the electrodes to escape, even if they aren't using self-bondage.
The different failure scenarios I've looked at:
FOC-stim code/firmware. I haven't been able to verify every single line, but so far I haven't found any serious bugs. It has multiple safety mechanisms such as a watchdog timer, input voltage protection and a current limiter, and I have seen the first two in action, so at least those seem to work. I think any bug which interrupts program execution would in the worst case lead to a constant current being emitted, which would not be transferred to the user thanks to the transformers. I think it's extremely unlikely that the signal generation algorithms would suddenly fail while given normal inputs, so I tried to focus on what would happen during different failure scenarios. With the current code I was unable to trigger any bug using "normal" input or cause any buffer overflow.
Risk mitigation:
A1. Document each function (e.g. doxygen) to describe what it does, valid input parameters, preconditions etc. to make it easier for reviewers to verify that the code works as intended.
A2. Add unit tests. They would make it easy to test a lot of edge cases that might not show up during manual testing, and most importantly they would reduce the risk of accidentally breaking things that have been manually verified previously. If you add the basic framework for tests, so you get it the way you like, then I could help write test cases.
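To illustrate A2: the firmware is C++, but the idea translates directly. Here's a Python sketch of the kind of pure helper (a hypothetical watchdog-timeout check, not an actual FOC-stim function, with a made-up 1-second timeout) where unit tests pin down boundary cases that are tedious to hit manually:

```python
# Illustrative only: watchdog_expired() is a hypothetical helper,
# not real FOC-stim code; the 1000 ms timeout is an assumption.

WATCHDOG_TIMEOUT_MS = 1000

def watchdog_expired(last_ping_ms: int, now_ms: int,
                     timeout_ms: int = WATCHDOG_TIMEOUT_MS) -> bool:
    """Return True if the host has been silent for longer than the timeout."""
    return (now_ms - last_ping_ms) > timeout_ms

# Tests cover the exact boundary, which manual testing rarely does:
assert not watchdog_expired(0, 0)        # just pinged
assert not watchdog_expired(0, 1000)     # exactly at the limit
assert watchdog_expired(0, 1001)         # one ms past the limit
assert watchdog_expired(5000, 10000)     # long silence
```

The key is keeping safety logic in small pure functions like this, so they can be tested without hardware in the loop.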
Communication errors. FOC-stim communicates with the computer using plain serial, and while I can't say for sure, I don't think it uses any parity bits. However, while a single bit flip could change A0999 to C0999, I think the risk of that happening is quite low. Lost packets, for example the initial calibration, could also cause the device to emit stronger signals than intended, but since Restim periodically re-sends all settings, and because of the watchdog timer, any error ought to be corrected within a second.
Risk mitigation:
B1. Use a more robust protocol which has checksums, sequence numbers and acks. Would take a bit of work to implement, and could make communication monitoring require more effort, but I don't think it would be too difficult to do. Personally I would use a binary protocol to make deserialization as simple as possible.
B2. Initialize calibration parameters to -10 in firmware (minimum) instead of 0.
B3. If keeping the serial text based protocol, I think it's better to not continue parsing commands in case of errors (unexpected characters or line buffer full). IMO any parse error should ignore data until the next newline.
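To make B1 concrete, here's a minimal sketch of what a checksummed binary frame could look like. The layout (seq byte, command byte, u16 value, CRC-32 trailer) is entirely made up for illustration, not the actual FOC-stim protocol:

```python
import struct
import zlib  # zlib.crc32 as the checksum

# Hypothetical frame: u8 seq | u8 cmd | u16 value | u32 crc32, little-endian.
HEADER = struct.Struct("<BBH")

def encode_frame(seq: int, cmd: int, value: int) -> bytes:
    body = HEADER.pack(seq & 0xFF, cmd, value)
    return body + struct.pack("<I", zlib.crc32(body))

def decode_frame(frame: bytes):
    """Return (seq, cmd, value), or None if the frame is corrupt."""
    if len(frame) != HEADER.size + 4:
        return None
    body = frame[:HEADER.size]
    (crc,) = struct.unpack("<I", frame[HEADER.size:])
    if zlib.crc32(body) != crc:
        return None  # bit flip detected; receiver would NAK or ignore
    return HEADER.unpack(body)

frame = encode_frame(seq=1, cmd=0x43, value=999)
assert decode_frame(frame) == (1, 0x43, 999)
# A single flipped bit is rejected instead of silently changing the command:
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
assert decode_frame(corrupted) is None
```

With fixed-size fields like this, deserialization on the firmware side is a single length check, a CRC check and a memcpy, which is exactly why I'd prefer binary over text.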
Restim code. Larger code base, but I did not find anything severe in the parts that I reviewed. Any bug here has the potential to be dangerous, since sending a high calibration value and/or a high volume would not be detected as a malfunction, especially if the user sets the FOC-stim pot to max and relies on software for intensity control. I think it's a risk that users can "calibrate" the signal intensity in multiple different ways, i.e. using the potentiometer, the calibration fields, and the global volume in Restim. This makes testing all use cases more convoluted, and if the behaviour of certain features is changed in the future, it might lead to different end results for different users.
Risk mitigation:
C1. Standardize the calibration/usage of FOC-stim/restim. IMO using global volume for calibration should not be an option, and it should be "safe" to use the full range of Vx000 to Vx999.
C2. Limit calibration (C command) range to a "safe" range. If you use -1.3 yourself, allowing up to 10 sounds a bit excessive. If you set the max value to something which ought to be enough for most users, anyone wishing to exceed it could still modify their own firmware to do so.
C3. Unit tests are always good to have :) But I think it's difficult to get good behaviour coverage in a GUI application.
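For C2, the firmware-side check is tiny. A sketch (in Python for readability; the real firmware is C++): the -10 floor comes from B2 above, while the 0.0 ceiling is just a placeholder "safe" maximum, not an actual FOC-stim limit:

```python
# Illustrative clamp for the C (calibration) command. CAL_MIN matches the
# -10 minimum suggested in B2; CAL_MAX = 0.0 is an assumed safe ceiling
# (I use around -1.3 myself), not a real FOC-stim constant.
CAL_MIN = -10.0
CAL_MAX = 0.0

def clamp_calibration(requested: float) -> float:
    """Accept any host value, but never exceed the firmware's hard limits."""
    return max(CAL_MIN, min(CAL_MAX, requested))

assert clamp_calibration(-1.3) == -1.3    # normal use passes through
assert clamp_calibration(5.0) == 0.0      # mistyped/corrupted value is capped
assert clamp_calibration(-99.0) == -10.0  # floor also enforced
```

The point is that the clamp lives in firmware, so no combination of host bugs, lost packets or user typos can push the output past the hard limit.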
Malicious funscript. Since Restim only transmits translated t-codes I think it's not easy to create a malicious funscript file, which in some way causes Restim/firmware to generate non-safe signals outside of its calibrated range.
Risk mitigation:
D1: Unit tests to really cover all kinds of weird input, and to prevent regression.
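For D1, a property-style test is a cheap way to "really cover all kinds of weird input": throw thousands of random strings at the parser and only assert invariants. The parser below is a made-up stand-in for the real T-code handler (assuming a Vx### volume command with channels 0-2, per the Vx000-Vx999 range mentioned above); the test pattern is the point:

```python
import random

# Hypothetical stand-in for the firmware's T-code line handler.
def parse_volume(line: str):
    """Parse 'Vx###' (x = channel 0-2, ### = 000-999); None on any error."""
    if len(line) != 5 or line[0] != "V" or not line[1:].isdigit():
        return None
    channel, value = int(line[1]), int(line[2:])
    if channel > 2:
        return None
    return channel, value

# Property test: random junk must never raise, and anything accepted
# must be inside the safe range.
random.seed(0)
for _ in range(10000):
    junk = "".join(chr(random.randrange(32, 127))
                   for _ in range(random.randrange(0, 8)))
    result = parse_volume(junk)  # must never throw
    if result is not None:
        channel, value = result
        assert 0 <= channel <= 2 and 0 <= value <= 999
```

This also doubles as a regression net: if a future refactor makes the parser crash or accept out-of-range values on garbage input, the test catches it immediately.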
Using FOC-stim with some other software. The usage of t-code suggests that the device is intended to be usable with other software too. I'm not sure whether other software written to control t-code based devices will send DPING at regular intervals, which could prevent interoperation. But if it is possible, a third-party program could have all sorts of bugs and/or implementation errors (e.g. not verifying a FOC-stim compatible version).
Risk mitigation:
E1. (same as B1) Use proprietary protocol, so that any software controlling the FOC-stim is explicitly designed to do so.
E2. (same as C1/C2) Set firmware limits so that no combination of C and V generate potentially dangerous signals.
User mistake/error. I think this currently poses the biggest risk. For example, if the user is calibrating their device, they likely have a signal playing while entering values. Accidentally writing a positive number instead of a negative one could lead to a big jump in intensity. I think it's also easy to quickly reach large numbers when holding the mouse button and moving around on the graphic calibration widget. The user might also not realize the device isn't at max volume, and calibrate it too high to compensate.
Risk mitigation:
F1. (same as C1/C2) Set firmware limits so that no combination of C and V generate potentially dangerous signals.
F2. Add GUI switch to unlock/lock calibration widgets.
F3. Visual warning if enabling calibration while volume is low.
F4. Keep separate calibrations/settings for different devices.
F5. Always ramp changes in calibration values instead of applying them immediately. It's probably a good idea to block further changes in the meantime, to prevent the user from continuing to increase the number if they don't realize it takes a few seconds to reach the value they entered.
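F5 is basically a slew-rate limiter. A sketch of the idea (the 0.1-units-per-tick rate is made up, and the function names are mine, not Restim's):

```python
# Sketch of F5: instead of jumping to a newly entered calibration value,
# step toward it by a fixed amount each control tick. RAMP_STEP is an
# arbitrary illustrative rate, not a real Restim/FOC-stim constant.
RAMP_STEP = 0.1

def ramp_toward(current: float, target: float,
                step: float = RAMP_STEP) -> float:
    """Move at most `step` toward `target`; call once per control tick."""
    if abs(target - current) <= step:
        return target
    return current + step if target > current else current - step

# A jump from -5.0 to -1.3 is applied gradually over a few dozen ticks,
# giving the user time to notice and abort if the new value was a mistake:
value = -5.0
ticks = 0
while value != -1.3:
    value = ramp_toward(value, -1.3)
    ticks += 1
assert value == -1.3 and ticks > 10
```

Blocking further edits while `value != target` (as suggested above) then falls out naturally: the widget stays locked until the ramp has landed on the entered value.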