At the end of the book, the Monolith is set to destroy humankind because its parent civilisation's data on us dates from 2001, just before we ceased being a violently self-destructive society and became a mature and peaceful one.
The proposed "solution" is to infect the Monolith with the deadliest computer virus known, thereby preventing it from carrying out its task.
Unfortunately this is totally unrealistic for several reasons:
1) It presumes the Monolith's AI to have *exactly* the right degree of intelligence -- smart enough to work out that this bunch of data is computer code, *and* what kind of system it needs to emulate in order to run it, but *not* smart enough to realise that the code in question is malicious.
2) It needs the Monolith to realise that we want it to run the code, yet never to question our motive for wanting it run -- very unlikely, given that it already has us tagged as an aggressive/hostile race.
3) Most seriously, it shows that Clarke was totally unaware of the single most important thing about emulation -- that the emulated machine runs in a "sandbox", and anything that goes wrong inside the emulated system (whether through accident or malice) affects that system only, not the system running the emulation (see the sketch below).
So in real life the virus would either have no effect whatsoever, or simply hand the Monolith another piece of evidence that we need (and deserve) to be destroyed.
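To make point 3 concrete, here is a minimal sketch -- a toy Python example of my own, nothing to do with anything in the book -- of what a sandbox means in practice. The emulated "guest" program deliberately trashes its own memory and then faults, while the host running the emulation carries on untouched:

```python
# Toy emulator: the guest's entire universe is the state held by this object,
# so nothing the guest does can reach outside it.

class TinyVM:
    """A toy virtual machine acting as the sandbox."""

    def __init__(self, program):
        self.program = program      # guest "code": list of (opcode, argument) pairs
        self.memory = [0] * 16      # guest "RAM"
        self.pc = 0                 # guest program counter
        self.crashed = False        # set if the guest faults

    def run(self, max_steps=100):
        for _ in range(max_steps):
            if self.pc >= len(self.program):
                return
            op, arg = self.program[self.pc]
            try:
                if op == "SET":                 # write a value into guest memory
                    addr, value = arg
                    self.memory[addr] = value
                elif op == "WIPE":              # "malicious" opcode: trash all guest memory
                    self.memory = [None] * len(self.memory)
                elif op == "HALT":
                    return
            except Exception:
                # Guest fault: recorded inside the sandbox; the host is unaffected
                self.crashed = True
                return
            self.pc += 1


# A hostile guest program: it wipes its own memory, then writes out of bounds.
virus = [("SET", (0, 42)), ("WIPE", None), ("SET", (99, 1)), ("HALT", None)]

vm = TinyVM(virus)
vm.run()
print("guest crashed:", vm.crashed)   # True: the damage is confined to the VM
print("host still running fine")      # the emulating system is untouched
```

However sophisticated the real thing, the principle is the same: everything the guest can touch is state owned and bounded by the host, so corrupting it damages the emulated world only -- which is exactly why the virus would bounce off the Monolith.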