AIF ERROR - An existing connection was forcibly closed by the remote host

We have an AIF service (WCF) in AX 2009 that receives data from an external system (the client), updates DAX, and returns the number of records updated.

Occasionally, the client gets back the error “An existing connection was forcibly closed by the remote host”. When we look at the data in DAX, it appears to have been updated successfully. Our process also saves copies of the request and response messages to a network directory, and the response message does not show up when this error occurs.

We have two load-balanced AIF servers and thought that might be a factor, so we changed the client config to always send requests to a specific AIF server, but we still get the errors no matter which server we point to.

Again, the error does not occur every time. Over the last year or two, there have been stretches when we saw it only once a month, and others when it occurred several times a day. Over the last few days we’ve been seeing it on about 50% of the requests, which are sent every 10 or 15 minutes.

Any help fixing this issue would be appreciated. Thanks in advance!

ACTUAL EXCEPTION (ERROR) MESSAGE:

System.ServiceModel.CommunicationException: An error occurred while receiving the HTTP response to http://myservice.svc. This could be due to the service endpoint binding not using the HTTP protocol. This could also be due to an HTTP request context being aborted by the server (possibly due to the service shutting down). See server logs for more details. ---> System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a receive. ---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException (0x80004005): An existing connection was forcibly closed by the remote host
   at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
   at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
   --- End of inner exception stack trace ---
   at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
   at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size)
   at System.Net.Connection.SyncRead(HttpWebRequest request, Boolean userRetrievedStream, Boolean probeRead)
   --- End of inner exception stack trace ---
   at System.Net.HttpWebRequest.GetResponse()
   at System.ServiceModel.Channels.HttpChannelFactory`1.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
   --- End of inner exception stack trace ---

Server stack trace:
   at System.ServiceModel.Channels.HttpChannelUtilities.ProcessGetResponseWebException(WebException webException, HttpWebRequest request, HttpAbortReason abortReason)
   at System.ServiceModel.Channels.HttpChannelFactory`1.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
   at System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout)
   at System.ServiceModel.Channels.SecurityChannelFactory`1.SecurityRequestChannel.Request(Message message, TimeSpan timeout)
   at System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout)
   at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
   at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
   at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)

Exception rethrown at [0]:
   at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
   at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
   at Director.UpdateIds(Update.ServiceIdsRequest request)
   at ClientDirector.UpdateIds(Update.ServiceIdsRequest request) in c:\ABCDirector\Service\Reference.cs:line 1523
   at Director.DAXUpdateServiceIDs…Client.Update.ServiceIds(Axdabc.DaxSvcId abc.DaxSvcId) in c:\ABCDirector\Service\Reference.cs:line 1529
   at Director.GetNewOrUpdated.sFromDaxTransaction.Send.ServiceIDsToDAX(TransactionDetail requestTransactionDetail) in c:\ABCDirector\ABC.Director\GetNewOrUpdatedDax.cs:line 247


Consider enabling tracing on the server; it may give you more information.

Hi Martin. I work with Bobby. We have tracing enabled; although it hasn’t revealed anything definitive, we think it is a timeout error.

On the client side, we only had the sendTimeout set (at 25 minutes; it’s timing out in 5). So I changed that config to set the open, close, and receive timeouts as well.

We’re still getting the error, though, so I’m wondering if there is a way to set those timeouts on the AIF side. I don’t see a way to do it in the Microsoft Service Configuration Editor that comes up from the AIF Services screen in DAX (we’re on AX 2009, by the way), although I could be missing something.

I could edit the AIF web.config directly, but I’m afraid it would get overwritten the next time we use the Configuration Editor from within DAX.

Does that mean having the service call run for more than five minutes is a valid scenario? Wouldn’t you rather split the work into smaller parts or use asynchronous processing?
If you decide to increase the limit above five minutes anyway, please tell us which timeout you can’t set through the Service Configuration Editor.

By the way, according to the documentation, exceeded timeouts should be logged as warnings. Review your logs (and tracing setup) once more.
My experience with WCF tracing is that it’s very detailed, so I’m surprised you didn’t get a single piece of useful information from it. Maybe it’s something obscure, but if it really is a timeout, you should see it there.
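For what it’s worth, a typical tracing setup in the service’s web.config looks something like the sketch below; the listener name and log path are just examples, and you can raise switchValue to Information or Verbose if warnings alone show nothing:

    <system.diagnostics>
      <sources>
        <!-- captures WCF warnings (including exceeded timeouts) plus activity boundaries -->
        <source name="System.ServiceModel"
                switchValue="Warning, ActivityTracing"
                propagateActivity="true">
          <listeners>
            <add name="xmlTrace"
                 type="System.Diagnostics.XmlWriterTraceListener"
                 initializeData="c:\logs\AifTrace.svclog" />
          </listeners>
        </source>
      </sources>
    </system.diagnostics>

The resulting .svclog file opens in the Service Trace Viewer (SvcTraceViewer.exe), which lets you follow a single request end to end.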

Ah, I see where the timeouts are set now in the editor (under the binding). I was looking for them on the host (where there are open and close timeouts) and on the binding endpoints. We needed the send and receive timeouts.
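For anyone who hits this later: the four timeouts end up as attributes on the binding element in the config the editor writes. A sketch, assuming the default basicHttpBinding that AIF web services use (the binding name and values are illustrative, not our exact settings):

    <bindings>
      <basicHttpBinding>
        <!-- openTimeout/closeTimeout govern channel setup and teardown;
             sendTimeout covers the whole request/reply on the client;
             receiveTimeout is how long an idle channel is kept alive on the server -->
        <binding name="AifServiceBinding"
                 openTimeout="00:01:00"
                 closeTimeout="00:01:00"
                 sendTimeout="00:10:00"
                 receiveTimeout="00:10:00" />
      </basicHttpBinding>
    </bindings>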

It may not be a WCF timeout. DAX appears to be updating the records as requested, but we never see the response message in the trace, and we also don’t see the response message XML that our process saves out to a network directory.

The reason we suspect a timeout is that, on the client side, the error about the existing connection being forcibly closed seems to arrive very close to five minutes after the request was sent. Not precise to the second, but consistently within 4 to 6 minutes. Yet neither the client nor the host WCF logs show any warning message.

The problem is intermittent, and when it does work, the responses come back in less than a minute. So we’re playing with timeout settings in an attempt to back our way into discovering what the problem really is by eliminating possibilities. Now that I’ve found the timeout settings on the bindings, I’m thinking we’ll change the sendTimeout from 1 minute to 5 or 10, just to see if it makes a difference. It probably won’t, since, as you mentioned, a real WCF timeout should leave a warning message in the trace logs.

We’re going to meet later this morning to examine what we know so far and bang our heads together to see if we can figure out what’s really going on. I’ll post our solution if we ever find one.

Thanks for your suggestions.

It’s looking like this was ultimately down to a bit of inefficient code in the service class on the DAX side. It seems joining SalesLine to itself in the same select isn’t a good plan; a rough sketch of the pattern is below.
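The offending code isn’t posted here, but the shape of the problem was roughly the following: a second SalesLine buffer joined back to the first inside a single select. Purely illustrative; the buffer names, join field, and _salesId parameter are invented for the example:

    // Illustrative X++ only: SalesLine joined to a second SalesLine buffer
    // in one select. On a large SalesLine table this produced a very
    // expensive query, which is what held the AIF request open for minutes.
    SalesLine salesLine;
    SalesLine salesLineOther;
    ;
    while select salesLine
        where salesLine.SalesId == _salesId
    join salesLineOther
        where salesLineOther.ItemId == salesLine.ItemId
    {
        // record update logic ran here
    }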

Still not sure why it didn’t result in an actual “timeout” error in the WCF traces, nor why the error on the client side showed up at the five-minute mark when none of the timeout settings were set to five minutes, but since the problem is solved, I’m content to let those mysteries remain.