Today I talked with Javid and showed him the script wrapper I have written so far.
Network upload/download: cURL
Currently, the memory workload works fine with lookbusy (though it still needs validation). The network workload also works, but cURL introduces noticeable CPU overhead at transfer rates above 1 MB/s.
The CPU workload works, but more calculation is needed to ensure that CPU overhead from other processes does not interfere too much with the generated workload. The main concern is the CPU overhead from cURL itself. Javid suggested that I run some tests with cURL, measure the CPU overhead it creates at different transfer rates, and subtract that from the CPU workload the program generates.
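One way to get those overhead numbers is to sample cURL's cumulative CPU time from /proc while it transfers at a fixed rate. The sketch below is a rough, Linux-only draft (function names and the 5-second sampling interval are my own choices, not anything Javid specified): it reads utime + stime from /proc/<pid>/stat twice and converts the tick delta into a CPU percentage that can later be subtracted from the target CPU load.

```python
import os
import time

CLK_TCK = os.sysconf("SC_CLK_TCK")  # kernel clock ticks per second

def proc_cpu_ticks(pid):
    """Cumulative user+system CPU ticks for a process, from /proc/<pid>/stat."""
    with open(f"/proc/{pid}/stat") as f:
        # Split after the ')' that closes the comm field; utime and stime are
        # then at indices 11 and 12 of the remaining fields.
        fields = f.read().rsplit(")", 1)[1].split()
    return int(fields[11]) + int(fields[12])

def cpu_percent(ticks_before, ticks_after, elapsed_s):
    """Convert a CPU-tick delta over a wall-clock interval into CPU percent."""
    return 100.0 * (ticks_after - ticks_before) / (CLK_TCK * elapsed_s)

def measure_curl_overhead(curl_pid, interval=5.0):
    """Sample an already-running curl process's CPU usage over `interval` seconds."""
    t0, before = time.time(), proc_cpu_ticks(curl_pid)
    time.sleep(interval)
    return cpu_percent(before, proc_cpu_ticks(curl_pid), time.time() - t0)
```

The idea would be to launch curl with `--limit-rate` at each rate of interest, call `measure_curl_overhead` on its PID, and tabulate rate versus CPU percent for the subtraction step.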
The file I/O workload was implemented incorrectly. It has come to the point where I will have to write my own program so I can properly control disk read and write rates.
This program should use a timer (say, every second) and perform the read and write operations on each tick. For disk reads, Javid gave the example of creating a 1GB garbage file in which each line is a 1MB string. The program can then randomly read multiple lines in parallel to achieve a controlled disk read speed. For example, for 5MB/s the program should pick 5 random lines from the 1GB file every second, read them, then pick another 5 random lines, and repeat. By picking lines randomly, we attempt to avoid problems created by OS caching.
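A first draft of that read side might look like the following. This is only a sketch of the scheme Javid described, with details I filled in myself: fixed-size 1MB records instead of newline-delimited lines (so the program can seek directly to a random record), and `os.urandom` for the garbage data. Random offsets only partially defeat the page cache, so O_DIRECT or `posix_fadvise(POSIX_FADV_DONTNEED)` may still be needed in the real version.

```python
import os
import random
import time

RECORD = 1024 * 1024  # 1 MB per record ("line")

def make_garbage_file(path, records):
    """Create the garbage file: `records` fixed-size 1 MB records."""
    with open(path, "wb") as f:
        for _ in range(records):
            f.write(os.urandom(RECORD))

def read_tick(path, records, rate_records):
    """Read `rate_records` randomly chosen records (i.e. rate_records MB per tick)."""
    total = 0
    with open(path, "rb") as f:
        for i in random.sample(range(records), rate_records):
            f.seek(i * RECORD)       # jump straight to the random record
            total += len(f.read(RECORD))
    return total

def disk_read_load(path, records, mb_per_s, duration_s):
    """Every second, read mb_per_s random records to approximate the target rate."""
    for _ in range(duration_s):
        start = time.time()
        read_tick(path, records, mb_per_s)
        time.sleep(max(0.0, 1.0 - (time.time() - start)))
```

For higher rates the per-tick reads could be spread across several threads, as in the parallel reading Javid suggested.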
For disk writes, the program should append a 1MB string as a line to a file every second, then flush it to make sure Linux actually writes it to disk. This should be done in parallel to reach higher write speeds.
Javid wants all of this finished by the weekend, even if it is not fully complete, and hopefully it will be.
Afterwards, I will have to build my virtual machines and think about the future experiments I will be conducting.
The virtual machines have to be done by next Thursday, as I will be handing them in so they can be put into the system.