As usually happens, any connected sex toy security news gets sent to me pretty quickly, saving me a bunch of time and effort. Thank you lazyweb!
A friend sent me a link to a blog post on Bananamafia.dev about vulnerabilities found in the Satisfyer line of devices and their associated apps and APIs.
I have to say, this blog post made me happy. Not because it's someone else looking at this class of device, or that they found some good vulnerabilities. What impressed me was that they did it right. It was professional, respectful, and they went through the vulnerability disclosure process (as best they could) properly. Huge kudos to them, and to Satisfyer for responding to them and accepting the report.
This report really helps illustrate the threats that a poorly secured connected sex toy poses, and proves that some of the dangers I feared are possible and no longer theoretical.
That said, the report is very technical and does not capture some of the nuances of the 'real world' risk these issues pose, ignoring a lot of context. A finding may be a legitimate vulnerability, but exploiting it is not always feasible or likely.
Bluetooth Commands and Vibration Limits
The risks highlighted in the post for Bluetooth communication revolve around the commands sent over Bluetooth from the app to the device and how they can be manipulated. In this case, the highest value the app would send to the device for vibration level was '66'. They found that they could manipulate that and send a value of up to '100'.
I've inquired with the writer about the effect of this: whether it did indeed make the vibration more intense at '100' than at '66', or whether the motor was already maxed out at '66' and the device simply accepted higher values. I will update this if I get a response.
Regardless of whether the vibration got physically more intense or not, this confirms one concern I've had and spoken about before: mechanical vs. software limiters. I always imagined that a company would build a device with an over-spec'd vibration motor where the maximum speed it was allowed to run at was lower than what it was capable of. Possibly for cost, but there could also be reasons of longevity, or wanting more torque to spin a larger mass (deeper vibration). Regardless of the reasons, how you implement this matters.
If the maximum a motor is rated for is, say, 100 rpm (purely as an example) and through product testing the most a person can comfortably handle is 66 rpm, you now want the "MAX" setting in your control (physical buttons or app) to be 66 rpm, even though the motor can handle more. There are many ways to do this. One is mechanical or electrical, such as a resistor limiting the voltage allowed to reach the motor when the setting is "MAX", with lower speeds reduced in a linear fashion (e.g. 50% power would be ~33 rpm). The other way is in software, where you set limits on what values can be sent to the circuitry controlling the motor. So the software could say "MAX=66", even though it's capable of driving the motor at 100 rpm or beyond.
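The difference matters in practice. As a minimal sketch (the function names and numbers here are illustrative, not taken from Satisfyer's firmware), an app-side limit leaves the device trusting whatever value arrives over Bluetooth, while a firmware-side clamp holds the line no matter who sent the command:

```python
MOTOR_MAX_RPM = 100  # what the motor can physically do
USER_MAX_RPM = 66    # what product testing decided is the safe "MAX"

def set_speed_app_limited(requested: int) -> int:
    # App-side limiting only: the firmware forwards whatever arrives,
    # so a crafted Bluetooth write can exceed the intended maximum.
    return requested

def set_speed_firmware_clamped(requested: int) -> int:
    # Defense in depth: the firmware clamps the value itself,
    # regardless of what the app (or an attacker) sends.
    return max(0, min(requested, USER_MAX_RPM))
```

With the first approach, sending '100' drives the motor at 100; with the second, the same packet still yields 66.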
Where this vulnerability gets concerning is that, depending on the device, that limit on speed may exist because the motor, at its full speed capabilities, draws more current from the battery than the device can safely provide. Most of these devices use lithium-ion batteries, which don't deal well with excessive current draw. This can cause heat and swelling, which may further the damage. That's bad because, as Samsung showed a few years ago, lithium batteries do not take well to heat and damage, and can explode. Considering where these devices are commonly used, a lithium battery fire would be a horrifying thing to have happen.
There are ways to build in protection circuits to prevent such over draw but keep in mind that the batteries on these devices are fairly small and the circuitry is also very small. There may not be room to put in overload protections commonly seen on larger devices. Or the designer didn't think this was necessary since the software would never send more than a "66" as the speed.
While this attack could lead to a catastrophic battery failure, it requires fairly low level access to interact with the device. A custom app that sends larger than normal values to a device is certainly possible, and some people wanting an extra thrill may do this at their own risk. But as for manipulating this somehow via the vendor's mobile app, particularly if the app is loaded from a trusted app store (as opposed to side loading and bypassing some security checks), the chances are very remote. However, there is always the potential that a method could be found in another device in the future, and so if manufacturers and designers don't consider hard mechanical/electrical limits on their devices, a literal "Fire in the loins" may be an unfortunate headline some day.
Account Takeover
This finding is interesting because it involves the interactions of the app with the vendor's back-end API (application programming interface), which handles logging in as a user and the "identity" you have when using their system to remote control a device.
Skipping over the details, an attacker could fairly easily (on a technical level) log in to the Satisfyer app as anyone due to the way the authentication scheme was implemented. They made the mistake of trusting the user (actually the app) to perform some of the authentication process on their device, rather than on the company's server. A situation that is usually not a good idea.
The interesting part is that the attack figures out the secret "token" used to actually log in, and doesn't require the password. So when an attacker successfully hijacks an account, they aren't changing the password (which would be an indication the account was compromised), and even if the user did change the password, the token is unaffected, so the account owner has no way to lock the attacker out.
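As a rough illustration of this general anti-pattern (this is not Satisfyer's actual scheme, and all the names here are hypothetical), compare a token the client can mint entirely on its own against one the server only issues after verifying the password:

```python
import hashlib
import secrets

def client_minted_token(email: str) -> str:
    # INSECURE pattern: the client derives the login token itself from
    # public information, so anyone who knows the victim's email can
    # mint a valid token -- no password required.
    return hashlib.sha256(email.encode()).hexdigest()

# SAFER pattern: the server checks the password, then issues a random,
# unguessable session token it can revoke (e.g. on password change).
SESSIONS: dict[str, str] = {}

def server_issued_token(email: str, password: str, check_password) -> str:
    if not check_password(email, password):
        raise PermissionError("bad credentials")
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = email
    return token
```

In the first pattern the token also never changes, which is exactly why a password reset wouldn't lock an attacker out.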
This attack allows for a seamless takeover of the victim's account. This includes access to their friends list and the ability to message, call, and remote control their friends' devices, all while appearing to be the account owner.
This is truly scary, especially since the takeover is seamless and doesn't immediately alert the owner that something is wrong. This opens up a huge potential for situations that, by my non-lawyer assessment, would count as sexual assault.
Imagine the feeling of finding out that the intimate acts you thought were being controlled by someone you gave permission to were actually being controlled by a stranger. Regardless of the fact that you didn't know at the time, or that it happened at a distance, the violation and trauma can be no less significant than that of a physical sexual assault. A situation this project hopes to never see happen.
"Rape By Deception" is a criminal act in certain juristictions but most places do not consider this kind of "at a distance" situation in their sexual assault laws and so it opens up a huge problem for courts should this occur. Account hijacks are not physical identity impersonation, but almost an emotional one. One that allows the attacker to prey on an established trust.
This is the most concerning issue since it involves something that can be done from anywhere and is not very difficult. Fortunately it seems that this issue has been patched, however one has to wonder what similar issues may exist in other applications for connected intimate devices.
WebRTC via coturn
This issue is, for the most part, a configuration problem. The company was using the same (admin) username and password for every connection to the server that relays video and audio calls between users (necessary for a bunch of technical networking reasons), so there exists the potential to use these credentials to see who else is connected to the server. The calls themselves would still be encrypted end-to-end between users, but an attacker could glean information about who is connected to the server (who is connected to whom, IP addresses, etc.). While not catastrophic, it is likely not something the company wants to have accessible. Fixing it would require an app update and some changes to the back end to allow these connections to be made in a more secure fashion, which means it takes time and money. According to the blog post, Satisfyer has indicated they plan to fix this in the near future.
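For reference, coturn supports short-lived per-user credentials via its `use-auth-secret` option: the TURN username embeds an expiry time, and the password is an HMAC-SHA1 over that username keyed with a shared secret, so no static admin credentials ever ship in the app. A minimal sketch of generating such credentials server-side (the secret and user ID here are placeholders):

```python
import base64
import hashlib
import hmac
import time

def turn_credentials(user_id: str, shared_secret: str, ttl: int = 3600):
    # coturn's use-auth-secret scheme: username is "expiry:userid",
    # password is base64(HMAC-SHA1(shared_secret, username)).
    username = f"{int(time.time()) + ttl}:{user_id}"
    digest = hmac.new(shared_secret.encode(), username.encode(),
                      hashlib.sha1).digest()
    password = base64.b64encode(digest).decode()
    return username, password
```

The app fetches fresh credentials from the vendor's API when it needs to relay a call, and they expire on their own after `ttl` seconds.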
Software Updates and DFU Mode
While technically interesting, much like the "ButtHax" talk a few years ago at DEF CON, a whole lot of things would have to happen to allow this to be exploited, and a non-trivial amount of work would be needed to do something nefarious.
In short, there is enough information in the app to put the device in upgrade mode. This is a feature of the software and a necessary function to do things like add features or fix security bugs. The problem is, if an attacker could write their own firmware and somehow get the device into update mode and send the new firmware, they could get very low level access to the device and have basically total control of it.
If Satisfyer is signing their update files, and the app is checking them, this is a huge hurdle for an attacker to overcome and one also has to ask, to what end? Someone could upload a broken firmware and "brick" a device, rendering its "smart" features unusable, but for anyone to put even that much effort in, there would need to be some motivation, and that is usually a financial one.
How someone could do this in a widespread way, with a reliable enough financial reward to justify the work and legal risk, is something I have trouble figuring out. There may be a case for some extortion attempt against the company (pay up or we break all your users' devices) or a competitor wanting to hurt the vendor and gain market share. The possibility exists for someone to just want to cause chaos as well. However, all these scenarios are unlikely, and so the risk posed by this issue is very low in my estimation. As said, updates should be signed and checked, and other security precautions taken to ensure this can't be abused, but I don't see widespread abuse being an issue.
To wrap up, I'm so glad to have been alerted to this blog post. It makes me happy to see others taking the security of sex toys and intimate devices as seriously and respectfully as I have. I'm also glad to see others working on it, because I can't do everything.
Satisfyer users shouldn't be completely freaked out by this. The issues were reported to the company and the high-risk ones have been fixed, so while there was a problem, it no longer exists, and nothing indicates it was abused maliciously before being discovered by the blog post's author.
As for Satisfyer, I hope they learn that engaging with outside researchers is a worthwhile activity that can make their products better and that they formalize a vulnerability disclosure program in the future. If they need it, I'm here to help!