If generating the report directly on the server already takes 1 min 30 sec, chances are high that a lot of individual queries are being made to build it. When the report is generated over the network, a round-trip time is added to each of those queries. So if there were 50,000 queries and the round-trip time was 2 ms, 100,000 ms (100 seconds) would be added to the time it takes to generate the report.
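
As a rough back-of-the-envelope check, the arithmetic looks like this; the numbers are just the example values from the paragraph above, so substitute your own query count and measured RTT:

    # Rough estimate of the extra delay added by per-query round trips.
    queries = 50_000          # individual queries sent over the network
    rtt_seconds = 0.002       # round-trip time from the handshake (2 ms)

    extra_delay = queries * rtt_seconds
    print(f"Added network delay: {extra_delay:.0f} s ({extra_delay / 60:.1f} min)")
    # -> Added network delay: 100 s (1.7 min)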

On top of that, there may be a low "fetch size", which means that the result of a single query is not sent in one response but has to be fetched block by block, adding a round trip for each block and making the problem even worse. A small worked example is sketched below.
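
Here is a minimal sketch of how a small fetch size multiplies the number of round trips; rows_per_query and fetch_size are made-up illustrative values, not anything measured from your setup:

    import math

    queries = 50_000
    rows_per_query = 40       # hypothetical average result-set size
    fetch_size = 10           # rows returned per fetch round trip
    rtt_seconds = 0.002

    round_trips = queries * math.ceil(rows_per_query / fetch_size)
    print(f"Round trips: {round_trips}, "
          f"extra delay: {round_trips * rtt_seconds:.0f} s")
    # With these numbers: 200000 round trips, 400 s of added delay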

Have a look at your capture file and check how many times you see a TCP time delta of approximately the initial round-trip time (Wireshark calculates this from the 3-way handshake and adds it to each TCP packet of the conversation for your reference). Then do the math on how much extra delay those round trips introduce and check whether it matches the time difference between running the report locally on the server and running it remotely from the client.
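
If you prefer to script that count instead of eyeballing it, something along these lines could work. It assumes tshark is on the PATH, that your capture is named "capture.pcapng" (adjust to your file), and that the TCP "Calculate conversation timestamps" preference is enabled so that tcp.time_delta is populated (the -o override turns it on for this run); treat it as a sketch, not a polished tool:

    import subprocess

    RTT = 0.002  # initial RTT from the 3-way handshake, in seconds

    # Dump the per-packet TCP time delta for every packet in the capture.
    out = subprocess.run(
        ["tshark", "-r", "capture.pcapng",
         "-o", "tcp.calculate_timestamps:TRUE",
         "-T", "fields", "-e", "tcp.time_delta"],
        capture_output=True, text=True, check=True,
    ).stdout

    deltas = [float(v) for v in out.split() if v]
    # Packets that waited roughly one round trip before being sent.
    waits = [d for d in deltas if 0.8 * RTT <= d <= 1.5 * RTT]
    print(f"{len(waits)} packets waited ~1 RTT, "
          f"adding roughly {sum(waits):.1f} s in total")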