mmcirvin.livejournal.com wrote in beamjockey's journal, 2005-11-29 02:30 pm (UTC)

For example, one could set up a thought experiment, or even a practical test, with a primitive “toy” computer, perhaps modeled along the lines of the first Illiac, and have programmers unfamiliar with the Illiac system try to hack the program.

For this to be a good analogy, they'd have to be able to hack it without ever being allowed to see or interact with the computer. Corrigan's first step, bootstrapping to a working program, already requires the results of his second step, in which the program analyzes the system it's running on. Unmotivated biological viruses, after all, arise out of a biological environment in which the host organisms are already present; they never face the problem of attacking a system they've had no contact with.

So it seems to me that this is about as likely a danger as the aliens simply beaming invasion soldiers over physically with a Star Trek transporter. If they're godlike enough to do things that can't be done according to any Earth conception of possibility, how would you possibly plan against it? What if they write a signal so cunning that it is able to reconstruct the missing information after you denature it? What about a signal that turns into a shark and eats you? A signal that makes 2+2 equal 5 so that the laws of physics break and you explode?

Now, the more interesting possibility, and the one I think is a real danger, is something like the A for Andromeda scenario, in which the message starts with social engineering that assumes something like an intelligent being is reading it. Begin with schematics for a physical machine, Contact-style, or even an abstract description of a processor with an instruction set; then, once the silly monkeys build that machine, it can do the malicious work. That's much more likely than the signal doing it all to the receiver's controller upon receipt.
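
To make the "abstract description of a processor with an instruction set" idea concrete, here is a minimal sketch of what such a description could amount to once the receivers render it in their own tools. The opcode names, the semantics, the sample program, and the choice of Python are all invented for illustration; they aren't from the comment or from A for Andromeda. The point is that the description is inert on its own: nothing executes until the receivers decide to build an interpreter for it and feed it the rest of the message.

    # A toy accumulator machine: the sort of thing an "abstract description of a
    # processor with an instruction set" could specify.  Opcode names, semantics,
    # and the sample program are hypothetical, invented for this sketch.

    def run(program, memory):
        """Execute a list of (opcode, operand) pairs on a one-register machine."""
        acc = 0   # the single accumulator register
        pc = 0    # program counter
        while pc < len(program):
            op, arg = program[pc]
            pc += 1
            if op == "LOAD":       # acc <- memory[arg]
                acc = memory[arg]
            elif op == "ADD":      # acc <- acc + memory[arg]
                acc += memory[arg]
            elif op == "STORE":    # memory[arg] <- acc
                memory[arg] = acc
            elif op == "JUMPZ":    # jump to instruction arg if acc == 0
                if acc == 0:
                    pc = arg
            elif op == "HALT":
                break
        return memory

    # A trivial "payload": add memory[0] and memory[1], leave the sum in memory[2].
    mem = run([("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)],
              {0: 2, 1: 3, 2: 0})
    print(mem[2])   # prints 5

Any payload delivered this way runs on a machine the receivers built and understand, which is exactly why this route sidesteps the bootstrapping problem above: the sender never has to know anything about the receiver's existing hardware.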
