TLSv1.2 Record Layer: Alert (Level: Fatal, Description: Internal Error)

asked 2018-12-21 01:54:55 +0000

net_tech

updated 2018-12-21 03:43:08 +0000

Hi,

Nginx is running on CentOS as a reverse proxy with a public cert. When devices connect to the service, they fail with the following errors:

RC:-500 MGMT_SSL:tera_mgmt_ssl_open_connection: SSL V3 cannot be set as min SSL protocol version. Ignoring.
RC:-500 MSS:(CERT_checkCertificateIssuer:1289) CERT_checkCertificateIssuerAux() failed: -7608
RC:-500 MSS:(CERT_validateCertificate:4038) CERT_checkCertificateIssuer() failed: -7608
RC:-7608 MGMT_SSL:tera_mgmt_ssl_open_connection: SSL_negotiateConnection() failed: Unknown Error
RC:-500 WEBSOCKET:tera_mgmt_ssl_open_connection failed (ssl_session_id: 4)

The software vendor was unable to help, so we turned to Wireshark.

It looks like we are breaking right at the certificate/key exchange.

Google shows several posts with the same issue ("TLSv1.2 Record Layer: Alert (Level: Fatal, Description: Internal Error)"), but no solution is offered.

Any suggestions on what to check are greatly appreciated.

Content Type: Alert (21)
Level: Fatal (2)
Description: Internal Error (80)

[screenshot of the capture around the handshake failure omitted]

ssl_ciphers  ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-CBC-SHA256:ECDHE-RSA-AES256-CBC-SHA384:DHE-RSA-AES128-CBC-SHA256:DHE-RSA-AES256-CBC-SHA256;
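For context, a minimal sketch of how that directive typically sits in the Nginx server block (the listen port, server name, paths and upstream below are placeholders I am assuming, not taken from the actual config):

server {
    listen 5172 ssl;
    server_name my.domain.com;

    # Nginx expects the server (leaf) certificate first in this file,
    # followed by the intermediate(s); the root may be omitted.
    ssl_certificate     /etc/nginx/ssl/my.domain.com.chain.crt;
    ssl_certificate_key /etc/nginx/ssl/my.domain.com.key;

    ssl_protocols TLSv1.2;
    ssl_ciphers   ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass https://backend;   # placeholder upstream
    }
}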

The client shows the following ciphers in its Client Hello:
[screenshot of the Client Hello cipher suites omitted]

The server selects TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (0xc02f) in its Server Hello.


Comments

As is often the case, troubleshooting by screenshot of a few columns from a capture is a frustrating exercise. Can you please provide the capture?

If not, who is providing the Fatal alert, the client or the server? If the client, I suspect there's something it doesn't like about the server certificate.
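For reference, one quick way to see which side sends the alert straight from the capture (a sketch: the field names below are for the ssl dissector in Wireshark/tshark 2.x, newer releases use tls.* instead, and capture.pcapng is a placeholder file name):

tshark -r capture.pcapng -Y "ssl.alert_message" -T fields \
    -e frame.number -e ip.src -e ip.dst -e ssl.alert_message.desc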

grahamb ( 2018-12-21 10:37:33 +0000 )

Sorry, yes, we tried to sanitize the capture with TraceWrangler, but the output file becomes useless after sanitizing.

The fatal alert is from the client, and we were capturing on the server side. A wildcard certificate from GoDaddy is being used.

net_tech ( 2018-12-21 11:40:12 +0000 )

I think you'll have to debug the client. Is it a browser? If so, have you tried another? Can you use openssl s_client ... to make a debuggable connection?

grahamb ( 2018-12-21 11:50:04 +0000 )

No, the client is a Teradici zero client trying to establish a connection to its management console over port 5172.

If we try to access the URL in ANY browser, we aren't able to reproduce the fatal alert. Chrome offers 17 cipher suites and agrees to use TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (0xc02f) sent in the Server Hello.

http://www.teradici.com/web-help/TER1...

net_tech ( 2018-12-21 12:01:33 +0000 )

Looks like an issue in the client then.

grahamb ( 2018-12-21 12:08:24 +0000 )

Thanks for the openssl tip.

openssl s_client -connect my.domain.com:5172 -tls1_2 -showcerts
CONNECTED(00000003)
depth=2 C = US, ST = Arizona, L = Scottsdale, O = "GoDaddy.com, Inc.", CN = Go Daddy Root Certificate Authority - G2
verify return:1
depth=1 C = US, ST = Arizona, L = Scottsdale, O = "GoDaddy.com, Inc.", OU = http://certs.godaddy.com/repository/, CN = Go Daddy Secure Certificate Authority - G2
verify return:1
depth=0 OU = Domain Control Validated, CN = *.domain.com
verify return:1

Now I am wondering if it's an Nginx misconfiguration or, as you said, a problem with the zero client not sending the Client Key Exchange, Change Cipher Spec, and Encrypted Handshake Message in the next expected packet for some reason.

https://stackoverflow.com/questions/3...

net_tech ( 2018-12-21 12:43:11 +0000 )

Personally, I still suspect the client, as everything else is happy with the server config and the client sends the alert immediately after receiving the certificate.

grahamb ( 2018-12-23 10:49:05 +0000 )

The client was breaking because the certificates were in the wrong order in the .crt file: the root cert was placed before the intermediate, when it should have been last.
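For anyone else hitting this, a sketch of rebuilding the bundle in the expected order, leaf first, intermediate next, root last or omitted (the file names below are placeholders, not the actual GoDaddy file names):

cat my.domain.com.crt gd_intermediate.crt gd_root.crt > my.domain.com.chain.crt
# point ssl_certificate at the rebuilt chain, then test and reload
nginx -t && systemctl reload nginx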

net_tech ( 2018-12-24 03:30:38 +0000 )

Interesting result. RFC 5246 (for TLS 1.2, which you seem to be using) says this about the list of certificates:

certificate_list
      This is a sequence (chain) of certificates.  The sender's
      certificate MUST come first in the list.  Each following
      certificate MUST directly certify the one preceding it.  Because
      certificate validation requires that root keys be distributed
      independently, the self-signed certificate that specifies the root
      certificate authority MAY be omitted from the chain, under the
      assumption that the remote end must already possess it in order to
      validate it in any case.

So it appears that the zero client was the only one that actually required the certs in the order specified in the RFC. From the RFC, it appears that the root certificate can be omitted from the list, rendering its order moot.
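If you want to double-check the order of the certificates inside a bundle without re-testing the client, something like this prints the subject and issuer of each cert in the order they appear in the file (the bundle file name is a placeholder):

openssl crl2pkcs7 -nocrl -certfile my.domain.com.chain.crt | openssl pkcs7 -print_certs -noout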

grahamb ( 2018-12-24 07:53:45 +0000 )