KevinH listed many possible limiting factors for production systems (in the server-specs question), but did not give any hard data, or even general ideas, on how much each factor affects service performance. This is understandable, as the interactions can be varied and complex. He also gave the hardware specs for a production system, but those specs are of little use without knowing the factors he listed (e.g. how many users? how much use?).
Could we possibly get a few production system profiles--hardware plus usage and performance stats? That way we could approximate new systems to some degree. Squid is another project with far too many factors to easily predict load/performance, and by maintaining a library of example production system profiles they do their best to help sysadmins. The Squid people also have a good document outlining general principles for cache design: which factors matter most in different scenarios, tips, illustrations of good design, and so on.
A hypothetical example:
Hardware/OS--dual Opteron 3100s, 4GB RAM, 6x200GB software RAID 5, FC3 with stock SMP kernel, only Zimbra-related services running.
Usage--2,500 users; heavy usage by all of them from 8am to 6pm; all use the AJAX web interface; no POP, very little IMAP; mixed HTTP/HTTPS; quota limit of 10GB; average inbox size of 200 messages; average total disk use of 50MB; total messages sent internally: 70,000; sent externally: 15,000; received externally: 30,000; antivirus is ON; antispam is ON; average number of attachments received per day: 1,000; average number of spam messages per day: 15,000.
Performance--at 2,250 users we began to notice a speed drop. We plan to add a second, identical server to the Zimbra cluster, migrate 500 users off the old server, and put all new users on the new one.
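To make such profiles comparable across sites, they could be kept in a simple machine-readable form. Here is a minimal sketch in Python, assuming the numbers from the hypothetical example above; the field names and the 10% headroom figure are my own invention, not any official Zimbra format or sizing rule:

```python
# A hypothetical "production profile" record, plus a crude capacity
# extrapolation. All keys and the headroom margin are illustrative.

profile = {
    "hardware": {
        "cpus": "dual Opteron 3100s",
        "ram_gb": 4,
        "disks": "6x200GB software RAID 5",
        "os": "FC3, stock SMP kernel",
    },
    "usage": {
        "users": 2500,
        "avg_inbox_messages": 200,
        "avg_disk_use_mb": 50,
        "msgs_sent_internal": 70000,
        "msgs_sent_external": 15000,
        "msgs_received_external": 30000,
        "antivirus": True,
        "antispam": True,
    },
    "performance": {
        # user count at which a slowdown was first noticed
        "degradation_at_users": 2250,
    },
}

def comfortable_capacity(profile, headroom=0.10):
    """Estimate a safe per-server user count: the observed degradation
    point minus an arbitrary safety margin (10 percent by default)."""
    limit = profile["performance"]["degradation_at_users"]
    return int(limit * (1 - headroom))

print(comfortable_capacity(profile))  # 2025 users per comparable server
```

With a library of such records, a sysadmin could pick the profile closest to their expected usage and scale server counts accordingly, rather than guessing from raw hardware specs alone.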