Database Wait Issue - BC 20.1 On Prem

Having an issue with our on-prem setup. We are currently seeing a high volume of ASYNC_NETWORK_IO waits. The DB and BC servers are VMs in the same cluster. I’m not sure if they’re on the same host or not, but I’ve asked my infrastructure team to check. The code causing it is part of the Insight Works Advanced Inventory Count extension. Any suggestions on what we should be looking at would be appreciated. Neither server is showing more than 50% CPU or RAM usage while this is happening.
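For anyone wanting to see what I’m seeing, a query along these lines against the standard wait-stats DMV will surface the top waits since the last restart. This is just a sketch, not the exact query I ran:

```sql
-- Sketch: top wait types on the instance since last restart.
-- The exclusion list filters a few common benign/idle waits; adjust to taste.
SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'BROKER_TASK_STOP',
                        N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP',
                        N'LAZYWRITER_SLEEP', N'XE_TIMER_EVENT')
ORDER BY wait_time_ms DESC;
```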

Hey Bryan -

Has this been this way since Go Live?

Were there any changes to firewall / anti-virus infrastructure recently? Any changes to the network setup? You can have your infrastructure team run a network trace to see what the packet transmissions look like.
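While they check the network, a quick DMV check can also show which sessions are currently sitting in that wait and where they are connecting from. Rough sketch, standard DMVs, nothing BC-specific:

```sql
-- Sketch: sessions currently waiting on ASYNC_NETWORK_IO, with client info.
SELECT
    r.session_id,
    r.wait_type,
    r.wait_time,          -- ms spent in the current wait
    s.host_name,          -- which client machine the session came from
    s.program_name        -- e.g. the BC service tier
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = r.session_id
WHERE r.wait_type = N'ASYNC_NETWORK_IO';
```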

Thanks,

No real change. We don’t do large inventory counts that often (once a quarter). This is the first time I’ve actively tried to monitor the database while we were running the count to see what’s going on. In the past, we’ve just kicked it off before going home for the night and checked on it later.

ASYNC_NETWORK_IO does not always mean there is a problem. Often it is more about how NAV/BC interacts with SQL. NAV/BC is not a true SQL application, meaning it was not designed from the ground up for SQL; rather, it has been adapted to SQL. Granted, that adaptation has gotten better over the years.

ASYNC_NETWORK_IO basically means SQL is waiting on a response from the client. SQL Server places query results in an output buffer and then waits for the client to fetch and acknowledge them. Since BC works with record sets rather than result sets, that acknowledgement can be delayed depending on what BC does with the results, and it may not happen until the function completes. This can lead to excessive locking and performance issues. But often this has more to do with what BC is doing than with networking or systems issues.
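If that database is on SQL Server 2017 or later and Query Store is enabled (both are assumptions here), a query along these lines can break the wait down per statement. Query Store’s “Network IO” wait category covers ASYNC_NETWORK_IO. Sketch only:

```sql
-- Sketch: which statements accumulate the most Network IO wait time,
-- per Query Store (requires SQL 2017+ with Query Store enabled on the BC DB).
SELECT TOP (10)
    qt.query_sql_text,
    SUM(ws.total_query_wait_time_ms) AS network_io_wait_ms
FROM sys.query_store_wait_stats AS ws
JOIN sys.query_store_plan AS p
    ON p.plan_id = ws.plan_id
JOIN sys.query_store_query AS q
    ON q.query_id = p.query_id
JOIN sys.query_store_query_text AS qt
    ON qt.query_text_id = q.query_text_id
WHERE ws.wait_category_desc = N'Network IO'   -- includes ASYNC_NETWORK_IO
GROUP BY qt.query_sql_text
ORDER BY network_io_wait_ms DESC;
```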


@Bryan_Christian I agree with @bbrown5962 - if there were no changes, it could be a performance issue in the ISV extension itself. I’d see if your partner can open a ticket with Insight Works to see if they can troubleshoot the performance issue. They may have a newer version of the app that fixes some of the performance issues you are experiencing.

I’ve got a case open with our partner, but I haven’t had time to schedule going through this process in our test database yet. We just asked them about updates to the extension while dealing with other issues related to the stock check, so I think we’re current. I’ll post an update if we figure out what we should have been focusing on.

Just realized I didn’t update here. To make a long story short: our backup admin had been overlooking a warning from Veeam saying it was failing to truncate the transaction log files. Because of the size those logs had grown to, quite a few transactions were taking longer than they should have while writing to the log. Once we got that corrected and got the log files back down to a reasonable size, this particular process took about a quarter of the time, or less.
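For anyone who hits this later, a check along these lines would have pointed straight at it. Sketch; a LOG_BACKUP value in log_reuse_wait_desc means the log is waiting on a log backup before it can truncate:

```sql
-- Sketch: why each database's log can't truncate, plus recovery model.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;

-- Log file size and percent used, per database:
DBCC SQLPERF (LOGSPACE);
```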


Hey Bryan! Glad to hear from you. Thank you for reporting back; that’s a good find. I hope you are doing well!