
A security problem: How to verify a calling application's integrity?


Banana
2008-05-23, 09:24
I'm hoping someone can direct me to a good source of information about verifying that the calling application has not been tampered with. I thought of checksumming it, but the problem is that I have no guarantee that the checksum being passed is actually the checksum of the calling application. I suspect this may not be a solvable problem, but a Google search gets snowballed with results about verifying that a downloaded file is OK, and I'm not aware of any other way to do this except digital signatures. But digital signatures seem to be geared more toward encrypting a document for communication than toward guaranteeing integrity?
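For what it's worth, computing the checksum itself is the easy part; it's trusting who computes it that's the problem. Hashing the binary in chunks with SHA-256 might look like this in Python (the file path is just a placeholder):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a file without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read fixed-size chunks until EOF (read() returns b"").
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# digest = sha256_of_file("calling_app.exe")  # path is hypothetical
```

But as I said, nothing stops a tampered caller from sending the checksum of the original binary instead of its own.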

Banana
2008-06-04, 13:45
*bump*

Anyone, maybe?

ast3r3x
2008-06-15, 13:00
Sounds similar to the halting problem :p

bassplayinMacFiend
2008-06-15, 18:59
Digital signatures are a common use of encryption. Sometimes the message is left unencrypted but is signed with the sender's private key, so when you get it and verify the signature with their public key, you know the message hasn't been tampered with. This still might not help if the remote program is cracked, because a good cracker would make sure any response contained the expected info.
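To illustrate the sign-then-verify idea with nothing but the standard library, here's a sketch using an HMAC over a shared secret rather than true public-key signatures (the secret and messages are made up). The tamper-detection principle is the same, though unlike a real signature both sides must hold the same key:

```python
import hashlib
import hmac

SECRET = b"shared-secret-key"  # hypothetical; must be kept out of attackers' hands

def sign(message: bytes) -> bytes:
    # Compute a MAC tag the receiver can use to detect tampering.
    return hmac.new(SECRET, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # compare_digest does a constant-time comparison of the tags.
    return hmac.compare_digest(sign(message), tag)
```

Of course, this has the same weakness discussed above: if the cracked program still holds the key, it can sign whatever it likes.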

Banana
2008-06-15, 19:11
Another consideration is that this isn't really about encrypting a message but rather verifying the messenger's identity: is that program actually the one I designed and implemented, or has it been tampered with? The only surefire method would be to read the binary, bit for bit, but I'm afraid that would be excessive and quite time consuming. That's why we ended up with checksums: they provide a fast check, but at the loss of any guarantee that it's the identical thing.

It does indeed look like a problem that hasn't been solved yet; that's okay, because I was aiming to make sure I hadn't missed something in understanding the security I want to implement.

Dave
2008-06-15, 21:39
I'm hoping someone can direct me to a good source of information about verifying that the calling application has not been tampered with. I thought of checksumming it, but the problem is that I have no guarantee that the checksum being passed is actually the checksum of the calling application.

You could open a telnet session and run the checksum yourself. Of course, if they can compromise one app, they can probably compromise another, but it's one more thing for them to get right.

Koodari
2008-06-17, 04:18
Another consideration is that this isn't really about encrypting a message but rather verifying the messenger's identity- is that program actually the one I designed and implemented or has it been tampered with? Only surefire method would be to read the binary, bit for bit

The devil is in the details. How do you propose to read the binary bit for bit? Surely not by having it read itself to you? (Then the tampered software will just include the entire untampered software for exactly that purpose.)

The key thing you do not say in your question is, can you trust the system the other piece of software is running on, or not?

If you can trust the system, then optimally you'd want to go through the system to verify that the (running!) binary is correct. You'd also want to verify that that exact process is the sole owner of the communication channel (such as a port) you intend to use to talk to it.

If you can't trust the system, then that software can never be completely secure, even in theory. What remains is the possibility of obfuscating it to make it more difficult to tamper with.

Here's a little something about real-world measures against software modification when that software is running on an untrusted system.
http://www.gamasutra.com/features/20000724/pritchard_01.htm

spotcatbug
2008-06-17, 06:54
I haven't followed this whole thread, but... would the .NET code signing stuff be of any use to you? In Visual Studio, in project properties, there's a "Signing" tab.

Banana
2008-06-17, 11:29
The devil is in the details.

Very true.

The key thing you do not say in your question is, can you trust the system the other piece of software is running on, or not?

Unfortunately, the answer would have to be no. While technically this is more of a Windows administration issue than a software issue, I cannot guarantee that a computer isn't running a keyboard logger or other malware peeking into the untampered software, which would pass any legit check I throw at it, even the most obtuse and obtrusive bit-for-bit check.

If you can trust the system, then optimally you'd want to go through the system to verify that the (running!) binary is correct. You'd also want to verify that that exact process is the sole owner of the communication channel (such as a port) you intend to use to talk to it.

I didn't consider that, but good point.

If you can't trust the system, then that software can never be completely secure, even in theory. What remains is the possibility of obfuscating it to make it more difficult to tamper with.

I suspected that would be the case. However, I don't believe that obscurity is security. Yes, it may slow tampering, but at whose expense? More likely, the cost will fall on the developer (me) or my users, not on the actual tamperers.

Here's a little something about real-world measures against software modification when that software is running on an untrusted system.
http://www.gamasutra.com/features/20000724/pritchard_01.htm

Thanks for sharing. Reading that article, I had the feeling that he was quite astute at identifying the problems but came up short on solutions. It didn't strike me as 100% effective. It may be fairly easy for them to stop outright cheating, but when cheating takes the form of simply peeking at, or acting on, generally unavailable information, it becomes difficult to control. Just like the keyboard-logging scenario I described above.

Of course, the onus would be on the IT dept to set a policy that keeps the system clean, so that I can trust the system to tell me the software hasn't been tampered with. But the trust is still there, and that's not really good.

I haven't followed this whole thread, but... would the .NET code signing stuff be of any use to you? In Visual Studio, in project properties, there's a "Signing" tab.

The other issue with a digital signature is that it's just there, so a tampered application can take it and pass it off as its own, just like Koodari's scenario of tampered software keeping a copy of the untampered software to pass the check and then hijacking the communication afterward.

The way I look at it, the best method I know of is an incomplete signature, with the crucial part supplied at runtime by the user (ideally without explicit knowledge on the user's part). This is actually what I do with users' SSH keys. I save each key with a passphrase that is basically a hash of the user's login password, so without that password, cracking the encryption is difficult because the passphrase is effectively random (well, near totally random, actually). Even if a keyboard logger captured the password at login, the attacker couldn't use that password to open the SSH key directly; they would still have to go through the login form to derive the salt+hash that is the actual passphrase for the key.
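Roughly, the derivation could be sketched like this; PBKDF2 here just stands in for whatever hash is actually used, and the password, salt handling, and iteration count are illustrative, not my real setup:

```python
import hashlib
import os

def derive_passphrase(login_password: str, salt: bytes) -> bytes:
    # The SSH key's passphrase is derived from the login password plus a
    # stored salt; the password itself is never written to disk.
    return hashlib.pbkdf2_hmac("sha256", login_password.encode("utf-8"),
                               salt, 200_000)

salt = os.urandom(16)                            # stored alongside the encrypted key
passphrase = derive_passphrase("hunter2", salt)  # unlocks the SSH key at login
```

The point is that the stored salt alone is useless without the password the user types at runtime.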

But I'm sure there are flaws in the model I set up; they could still pull the information out of the memory heap, for example, or tamper with the software that requires the log-in so it leaks the actual passphrase, etc.

The goal of secure software may very well be a fool's errand, but my intention is more to be able to document the threat model, identify the trusted systems, and say that yes, I did my homework and this is the best we could come up with.