Forums

The forums ran from 2008-2020 and are now closed and viewable here as an archive.

Home › Forums › Back End › Script execution time isn't correct

Viewing 4 posts - 1 through 4 (of 4 total)
  • Author
    Posts
  • #174352
    fooman
    Participant

    I have a test page that connects to a third-party DB, executes a stored procedure, and then spits out results.

    I can literally see that the page takes about 8-10 seconds to execute. Maybe more.
    I try to time it in PHP, but I get a 2-second result from the microtime() function.

    Here is what I’m doing (I’ve stripped out the execution code to focus on the timing stuff).

    $startMS = microtime(true); // microtime(true) returns seconds as a float, not milliseconds
    
    // Do stuff with DB
    
    $finishMS = microtime(true);
    $totalMS = $finishMS - $startMS; // elapsed time in seconds
    echo "Started: " . $startMS . '<br>';
    echo "Ended: " . $finishMS . '<br>';
    echo "This page was created in " . $totalMS . " seconds";

    I literally SEE the page take a long time to show anything, but $totalMS returns 2.xxxxx seconds or so. Am I doing anything wrong? Should I go about timing the script another way?
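    One thing worth checking (a sketch, not from the thread itself): PHP records a timestamp before any of your code runs, in `$_SERVER['REQUEST_TIME_FLOAT']` (PHP 5.4+). Comparing it against your own timer shows whether the missing seconds are spent before your first `microtime()` call ever executes (the `usleep()` below is a stand-in for the DB work):

```php
<?php
// $_SERVER['REQUEST_TIME_FLOAT'] (PHP 5.4+) is the timestamp taken when the
// request started -- before any of your code ran -- so comparing it against
// microtime(true) exposes time your own timer never sees.
$startMS = microtime(true);

usleep(50000); // stand-in for the DB work being timed (~0.05 s)

$finishMS = microtime(true);

$timedByScript = $finishMS - $startMS;
$wholeRequest  = $finishMS - $_SERVER['REQUEST_TIME_FLOAT'];

printf("Timed by the script:  %.4f s\n", $timedByScript);
printf("Whole request so far: %.4f s\n", $wholeRequest);
// If the second number is much larger than the first, the delay happens
// before the timed section (includes, autoloading, server overhead, etc.).
```

    If the two numbers roughly match, the delay is happening outside PHP entirely (network, server, or the browser rendering the response).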

    #174368
    __
    Participant

    A few possibilities:

    • the HTTP request or response (i.e., upload/download times) is hanging somewhere or taking a really long/slow route
    • there is some other resource on the page that blocks rendering while being loaded (e.g., CDN-hosted scripts, etc.)
    • your script is doing other stuff before or after you time it
    • Apache (or whatever server you use) is taking a long time to process the request and invoke the PHP engine (this might be the case on a misconfigured server, or on a shared host that has too many accounts or very high traffic)

    You could check the network path and server response times using a program like traceroute (on Linux; the Windows equivalent is tracert). Basically, just time a blank request/response.

    (You say “third-party DB” — are you including the time it takes to connect/disconnect in your test?)
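    To separate connection time from query time, one approach is to timestamp each phase. A minimal sketch, using PDO with an in-memory SQLite database as a stand-in for the third-party MSSQL connection (the DSN and query here are placeholders; the phase split is the point):

```php
<?php
$timings = [];
$mark = microtime(true);

// Phase 1: connect. The DSN is a stand-in; yours would point at the
// third-party server, where slow DNS or network can dominate.
$db = new PDO('sqlite::memory:');
$timings['connect'] = microtime(true) - $mark;

$mark = microtime(true);
// Phase 2: execute. A stand-in query; yours would call the stored procedure.
$stmt = $db->query('SELECT 42 AS answer');
$timings['execute'] = microtime(true) - $mark;

$mark = microtime(true);
// Phase 3: fetch. With large result sets, transfer time shows up here.
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
$timings['fetch'] = microtime(true) - $mark;

foreach ($timings as $phase => $seconds) {
    printf("%-8s %.4f s\n", $phase, $seconds);
}
```

    Whichever phase dominates tells you where to look next: connect points at the network or server, execute at the stored procedure, fetch at the result-set size.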

    To see if it’s activity other than your DB calls, literally remove everything else from the script and HTML response (a reduced test case, if you haven’t done this already).

    As for checking Apache loads and request times, you’ll probably be at the mercy of your host (or whatever they pass off as “tech support”). Not to be negative, but it’s flat-out unlikely that you’ll get anything useful. Instead, you might ask them to put you on another, less-crowded server and see if the problem persists.

    I think it’s very unlikely that PHP’s microtime is not returning the correct times. It gets the time from the host system, so if it’s wrong or slow, that would mean the server’s clock is wrong or slow. I suppose you could check by getting the timestamp a few mornings in a row, at the same time, and checking that the intervals are (reasonably) equal.
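    A quicker sanity check, if the server runs PHP 7.3+, is to compare microtime() against hrtime(), which reads a monotonic clock that NTP adjustments cannot move. If both report the same interval across a known sleep, microtime() can be trusted:

```php
<?php
// hrtime() (PHP 7.3+) is monotonic: it cannot jump when the system clock is
// adjusted, so it makes a good yardstick for microtime()'s wall-clock reading.
$wallStart = microtime(true);
$monoStart = hrtime(true);        // nanoseconds

usleep(200000);                   // sleep ~0.2 s

$wallElapsed = microtime(true) - $wallStart;
$monoElapsed = (hrtime(true) - $monoStart) / 1e9;

printf("microtime interval: %.4f s\n", $wallElapsed);
printf("hrtime interval:    %.4f s\n", $monoElapsed);
// If the two disagree by more than a few milliseconds, the system clock was
// adjusted mid-measurement; otherwise microtime() is trustworthy here.
```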

    #174382
    __
    Participant

    “I’m using an IIS server set up and controlled by our client.”

    My condolences (on all fronts).

    “So perhaps it is the number of records…? I thought views were quicker than queries and would store temp results… I’d appreciate any insight anyone has on the reasons why a stored procedure takes that long to run. I didn’t think 22k records was a whole lot :/”

    Well, 22k rows is nothing if the database and queries are well-designed and indexed. On the other hand, 100 rows might be an impractically huge query if the DB is poorly optimized.

    Afterthought: how large is the data you’re retrieving?

    Did you build the DB? (From your comments, I get the impression the DB was preexisting, but you might have written the particular stored procedure.)

    I might be able to offer some insight if I saw the stored procedure you’re using. I don’t work with MSSQL specifically, but I’m good with SQL in general and relational design.

    #174394
    __
    Participant

    “Now my question is, if I’m searching for JUST one piece of data (the ID of a table), is 500k rows a lot?”

    Are you actually selecting all those ids? A result set like that could be several MBs in size, depending on the data type of the id column. The query itself should still be fairly quick, but transferring the data might take a noticeable amount of time.

    Or are you looking for a specific id from a table with 500k rows? Assuming the id is indexed, finding it should be nearly instantaneous.

    Is there a where clause or other conditions in your query? If so, make sure the table’s indexes also cover those conditions as much as possible.

    You also mentioned an “inner select”. SQL engines can optimize some inner queries, but not others: certain types of inner queries can be run once for the entire query, while others trick the engine into running them once per row. (This is the same problem people often run into with joins: joins aren’t slow in and of themselves, but it’s easy to trap the DB into executing them very inefficiently.) If you’ve got one that can’t be optimized, you should look into how to rewrite it. This will be very MSSQL-specific; I’m afraid I couldn’t offer much advice on it.
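    The once-per-query vs. once-per-row distinction can be made visible with an engine’s plan output. A sketch using SQLite as a stand-in for MSSQL (the table and queries are invented for illustration): the same logical question, written first as a correlated subquery and then as a derived-table join.

```php
<?php
// SQLite stand-in for the MSSQL case: the same logical query written as a
// correlated subquery vs. a derived-table join.
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)');

// Correlated form: the inner SELECT references the outer row, so a naive
// plan re-runs it for each row of orders.
$correlated = 'SELECT id FROM orders o
               WHERE total > (SELECT AVG(total) FROM orders i
                              WHERE i.customer_id = o.customer_id)';

// Join form: the aggregate runs once per customer, then joins back.
$joined = 'SELECT o.id FROM orders o
           JOIN (SELECT customer_id, AVG(total) AS avg_total
                 FROM orders GROUP BY customer_id) a
             ON a.customer_id = o.customer_id
           WHERE o.total > a.avg_total';

// EXPLAIN QUERY PLAN is SQLite's rough analog of an MSSQL execution plan.
foreach (['correlated' => $correlated, 'join' => $joined] as $name => $sql) {
    echo "-- $name plan --\n";
    foreach ($db->query("EXPLAIN QUERY PLAN $sql") as $row) {
        echo $row['detail'], "\n";
    }
}
```

    Both forms return the same rows; only the plans differ. How much the rewrite matters depends on how well the particular engine optimizes the correlated form.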

    “I started by searching for a single piece of data from a single table. It took the same 8-10 seconds.”

    This would make me worry that there is no index for the data you are searching, forcing the DB to scan the entire table. See if you can add an index that covers your search.
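    The difference an index makes is easy to demonstrate. A sketch with SQLite standing in for MSSQL (the table, column, and row count are invented): the same lookup is timed before and after adding an index on the searched column.

```php
<?php
// Build a table with many rows, then compare a lookup before and after
// adding an index on the searched column.
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE items (id INTEGER PRIMARY KEY, sku TEXT)');

$db->beginTransaction();
$ins = $db->prepare('INSERT INTO items (sku) VALUES (?)');
for ($i = 0; $i < 50000; $i++) {
    $ins->execute(['SKU-' . $i]);
}
$db->commit();

$lookup = function () use ($db) {
    $t = microtime(true);
    $stmt = $db->prepare('SELECT id FROM items WHERE sku = ?');
    $stmt->execute(['SKU-49999']);
    return [$stmt->fetchColumn(), microtime(true) - $t];
};

[$id, $scanTime]  = $lookup();                       // full table scan
$db->exec('CREATE INDEX idx_sku ON items (sku)');
[$id2, $seekTime] = $lookup();                       // index seek

printf("full scan:  %.5f s\n", $scanTime);
printf("index seek: %.5f s\n", $seekTime);
```

    The gap grows with the row count: the scan is linear in table size, the seek is logarithmic, which is why a missing index shows up as a flat 8-10 seconds no matter how small the result is.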

    MSSQL has an equivalent of EXPLAIN: the execution plan (SET SHOWPLAN_ALL, or “Include Actual Execution Plan” in SSMS). Execution plans can be difficult to read and understand (I hate them), but there’s usually something you can combine with some determined googling and get something helpful in return. Finding the right words to search Stack Overflow could be very helpful as well.

  • The forum ‘Back End’ is closed to new topics and replies.