Written by: Szilárd Pfeiffer, Security Evangelist & Engineer, Balasys
Google lists 12,400,000 results for the search "malware detection tools." Is malware detection a silver bullet, or is there a smarter method to prevent malware attacks? We believe there is one.
Though we would not argue against the importance of detecting malware, there should also be a cheap and effective step before detection, namely prevention. A malicious email that is never delivered to its recipients will never cause security issues. According to CSO, 94% of malware in 2020 was spread through the email system. Email, which remains an essential tool and still forms the basis of many business processes, uses one of the oldest and least secure communication protocols on the internet. Even though there are several ways to apply security policies to the mail transfer protocol to prevent the delivery of suspicious mail, these methods are not nearly as widely declared and enforced as they should be.
The security of the other commonplace older protocol, HTTP, the basis of the World Wide Web, has been in focus for more than ten years, leading to significant progress in this area. It is enough to mention that HTTP has been almost completely replaced by HTTPS, an encrypted version of the same protocol that provides confidentiality, integrity, and authenticity during data transfer. It should not be forgotten that just a few years ago, several sites sent credentials, customer data, or payment information over the internet without any kind of encryption. Post CCPA, GDPR, and similar regulations, this is now almost unimaginable. Even a simple news site without any kind of authentication now uses HTTPS, quite rightly, to prevent modification of its content in transit, for instance. Try to imagine the potential consequences of a man-in-the-middle attack on one of the most popular news sites. As many business processes use the web and the protocol itself could not be replaced, the ecosystem as a whole had to be improved to prevent downgrade attacks, clickjacking, cross-site scripting, and other malicious activities. But how secure is the transfer of electronic mail? Email is unquestionably still the basis of many formal and informal business processes, and as long as this is the case, it is going to be a target of malicious parties. The answer to the question of email security is a simple one, but not so encouraging: the state of mail transfer security is not nearly as good as in the domain of web page transfers.
There are significant differences between mail transfer and web page transfer. Web page transfer is usually performed between a web client (browser) and a web server. Mail, on the other hand, is usually transferred by the mail client not to the recipient, but to the server of the sender's mail provider. That server passes the mail to the mail server of the recipient's email provider, and finally the recipient downloads it asynchronously from that server. As you can see, there is a server-to-server communication that cannot be influenced by either of the clients. The details of that communication are determined by the participating servers and the Simple Mail Transfer Protocol (SMTP), which was designed at the beginning of the 1980s to be simple, not secure.
Originally, the protocol did not contain any encryption capability. The opportunistic use of TLS was introduced in 1999, though only as an optional extension (STARTTLS). Twenty years on, a publicly referenced SMTP server still must not require use of this extension to deliver mail locally, according to the related RFC. This means that the owner of a recipient domain, notwithstanding a legitimate interest, is not allowed to require encryption during the mail delivery process; it depends on the sender's goodwill to initiate encryption. Without it, both client and server are exposed to a malicious third party with man-in-the-middle capability that can enforce a lack of encryption, regardless of the fact that both parties intend to encrypt. Without encryption, the recipient cannot authenticate the sender and vice versa, and beyond this, the mail transfer is vulnerable to eavesdropping as well as content modification. Under such circumstances, the recipient may believe a sender is trusted and therefore believe that an attachment is trusted, while the attachment could in fact be malware forged into the mail by a malicious third party.
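The downgrade problem can be illustrated with a minimal sketch (a hypothetical helper, not a real SMTP client): a client that upgrades to TLS only opportunistically will silently fall back to plaintext when a man-in-the-middle strips the STARTTLS capability from the server's EHLO response.

```python
# Sketch of opportunistic STARTTLS negotiation (illustrative only).
# An opportunistic client upgrades to TLS only if the server advertises
# STARTTLS in its EHLO response; a MITM that filters out that line
# forces the whole session to stay in plaintext.

def will_encrypt(ehlo_capabilities):
    """Return True if an opportunistic client would issue STARTTLS."""
    return "STARTTLS" in ehlo_capabilities

honest_ehlo = ["SIZE 35882577", "8BITMIME", "STARTTLS", "PIPELINING"]
# A MITM with full control of the TCP stream simply drops the line:
mitm_ehlo = [cap for cap in honest_ehlo if cap != "STARTTLS"]

print(will_encrypt(honest_ehlo))  # True  - both parties intend to encrypt
print(will_encrypt(mitm_ehlo))    # False - downgrade, no error on either side
```

Note that neither party observes an error: the client simply never asks for encryption, which is exactly why the recipient's intention to encrypt must be published out of band.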
There are several mechanisms, like BIMI, DANE, DKIM, DMARC, DNSSEC, MTA-STS, SPF, and TLS-RPT, which aim to remedy the situation with a greater or lesser degree of success. We will examine the theories behind these mechanisms, their practical limitations, configuration difficulties, and the effect they achieve.
In general, it can be said that the most important limitations are prevalence and enforceability. As can be seen from the chart, there are significant differences in prevalence if we compare the top 1,000, 100,000, and one million domains of the Majestic Million. Prevalence is much higher among the most popular domains than among the others. The reason cannot only be that the Fortune 1,000 companies enjoy much better financial conditions, as these mechanisms can be implemented without significant cost. A lack of knowledge could be one reason, even though several guides and easy-to-use tools are available to generate the configurations needed to enable these mechanisms. Another reason might be that there is no regulatory body that can enforce the usage of these mechanisms or the execution of the policies they declare; it can only recommend them, as NIST does. The last reason to mention is that domain owners also have no power to force the execution of the declared policies on the mail systems of the recipient side. In any case, big tech companies like Google, which has a significant share of the mail service market, have begun to put pressure on domain owners who want to provide mail service for their domain to publish policies, although they do not necessarily strictly comply with them.
In summary, the lack of these mechanisms on the sender side means a relatively high risk that emails sent from the domain will be put in a spam folder or rejected, at least by the biggest mail providers. The lack of checking of the declared policies on the recipient side means a relatively high risk of incoming mail from unauthorized or even malicious actors who can impersonate trusted parties. Usually, impersonation is the first step of a targeted attack, where fraudsters can inject misinformation or bogus content such as scams, viruses, spyware, and malware into the organization.
Through the Sender Policy Framework (SPF), the sender side can publish a policy declaring which servers are allowed to send mail in the name of the given domain. The recipient side can use the published policy to check that incoming mail was actually sent by an authorized server. The policy itself is published in a TXT record of the given domain (e.g. example.com), and the servers can be declared by their IP addresses or indirectly, by referring to other DNS records. The simplest case, at least in this respect, is when the domain owner maintains its own email server and the MX record of the domain contains the IP address(es) of the mail server(s), so only the MX record needs to be referenced. From another point of view, it may seem easier to use a software-as-a-service solution for your mail service. In that case, you can include the policy of the SaaS provider or simply redirect to it.
example.com. IN TXT "v=spf1 mx -all"
example.com. IN TXT "v=spf1 include:_spf.google.com -all"
example.com. IN TXT "v=spf1 redirect=spf.protection.outlook.com"
The tricky part of a sender policy framework is not the explicit declaration of the servers, but the declaration of the default action. The default action answers the question of what should be done when the recipient finds no explicit rule that matches the IP address of the actual sender. For this, a special mechanism (all) can be used, optionally with a qualifier. The first two examples above use the fail (-) qualifier, so the evaluation result of the Sender Policy Framework will be fail for clients not explicitly listed in the policy, meaning that they are not authorized to use the domain.
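How a recipient interprets the default qualifier can be sketched with a toy evaluator. This is a deliberate simplification: real SPF evaluation per RFC 7208 also resolves mx, a, include, and redirect terms via DNS and matches the sender's IP address against them.

```python
# Toy SPF default-action evaluator (illustrative; real SPF evaluation
# also resolves mx/a/include/redirect mechanisms via DNS lookups).

QUALIFIERS = {"+": "pass", "-": "fail", "~": "softfail", "?": "neutral"}

def default_action(spf_record):
    """Return the result applied when no explicit mechanism matches."""
    for term in spf_record.split()[1:]:          # skip the "v=spf1" tag
        qualifier = term[0] if term[0] in QUALIFIERS else "+"
        mechanism = term.lstrip("+-~?")
        if mechanism == "all":
            return QUALIFIERS[qualifier]
    return "neutral"                             # no "all" term at all

print(default_action("v=spf1 mx -all"))   # fail
print(default_action("v=spf1 mx ~all"))   # softfail
print(default_action("v=spf1 mx ?all"))   # neutral
```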
Although a default fail is considered the most secure method, fewer than one-third of SPF records use it. This may be because it is not the most convenient method from a maintenance perspective, as a missing IP address may cause a failure in delivery.
There are two other options to avoid this problem. Domain owners can explicitly state that they are not asserting whether the IP address is authorized or not. This can be done by adding the neutral (?) qualifier. This might seem convenient from a maintenance perspective, but has almost no added value on the security side: the result is the same as when the default is missing, and the case is similar when the whole SPF record is missing. Counting all the MX records, not only the ones which have SPF TXT records, the ratio of the neutral default (?all) is more than one-third. More than half of the SPF records use the soft fail qualifier (~) for the default behavior, meaning that the host is probably not authorized. From a security perspective, this is only slightly better than neutral.
Why is the default action so important? Just imagine a firewall whose default rule, the one applied when no explicit rule matches the traffic, said only that the traffic is "not asserted" or "probably not authorized". It would be a security nightmare. Domain owners should know which servers are authoritative and should declare them strictly. In that case, the recipient can be sure that the sender is authoritative and that mail from a representative of a trusted company really comes from them, and as a result avoid becoming the victim of a phishing or social engineering attack. Though strict rules may affect business continuity, something which must be taken into consideration, this issue should be handled by another mechanism, like DMARC, not by making the policy more permissive.
A severe limitation of the Sender Policy Framework is that servers are authorized by their IP address(es), which is vulnerable to IP spoofing, and they can be declared indirectly, referring to IP addresses by hostnames or other DNS records, which is vulnerable to DNS spoofing. As the policy is stored in a DNS record, it should be transferred in a way that guarantees the integrity of the data: without DNSSEC, or at least DNS over TLS/HTTPS, the policy information is vulnerable to tampering. The most serious issue with the Sender Policy Framework is that while its prevalence is relatively high, the really important question is what proportion of recipient servers actually enforce it. In any case, though the Sender Policy Framework can improve the level of security, publishing a policy alone has only a modest effect, which means that it should be combined with other mechanisms, as it can prove neither the integrity nor the confidentiality of a mail.
In comparison, the Sender Policy Framework makes it possible to determine whether a server is authorized to send email in the name of a given domain, whereas the DomainKeys Identified Mail (DKIM) method can be used to authenticate the mail itself, independently of the sender's IP address. The sender inserts a header (DKIM-Signature) into the mail which contains at least the hash of the body (bh), the signature of the content (b), the list of the headers (h) included when the signature is computed, the algorithm (a) used to create the signature, and the selector (s) which, together with the domain (d), gives the location of the public key that can be used to verify the signature.
DKIM-Signature: v=1; a=rsa-sha256; d=example.net; s=brisbane;
    c=relaxed/simple; q=dns/txt; t=1117574938; x=1118006938;
    h=from:to:subject:date:keywords:keywords;
    bh=MTIzNDU2Nzg5MDEyMzQ1Njc4OTAxMjM0NTY3ODkwMTI=;
    b=dzdVyOfAKCdLXdJOc9G2q8LoXSlEniSbav+yuU4zGeeruD00lszZVoG4ZHRNiYzR
The recipient mail server verifies the signature with the public key found at the given location. A successful verification proves that the mail was signed by an actor who owns the private key belonging to that public key. The signed value is the hash of the original mail as computed by the signer module of the sending server; the recipient server recomputes the hash and compares it against the one protected by the signature. If the two hashes are identical, the mail body (including the attachments) and the listed headers are guaranteed not to have been altered by a third party. This essentially means that there is no malicious attachment in the mail unless the sender attached it, and that nobody has modified any data in the mail, for instance an account number, to commit email fraud.
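The body-hash part of this check can be sketched with the standard library. This is a minimal sketch of the "simple" body canonicalization only (which normalizes trailing empty lines); a full DKIM verifier must also canonicalize the listed headers and verify the RSA or ECDSA signature in the b= tag.

```python
# Sketch of DKIM body-hash (bh=) verification with "simple" body
# canonicalization: CRLF line endings, trailing empty lines reduced
# to a single CRLF. Header canonicalization and signature (b=)
# verification are omitted for brevity.
import base64
import hashlib

def body_hash(body: bytes) -> str:
    canonical = body.replace(b"\r\n", b"\n").replace(b"\n", b"\r\n")
    canonical = canonical.rstrip(b"\r\n") + b"\r\n"  # normalize trailing CRLFs
    return base64.b64encode(hashlib.sha256(canonical).digest()).decode()

original = b"Dear customer,\r\nplease pay to account 12345678.\r\n"
tampered = b"Dear customer,\r\nplease pay to account 99999999.\r\n"

bh = body_hash(original)              # the value the sender puts into bh=
print(body_hash(original) == bh)      # True  - body unchanged in transit
print(body_hash(tampered) == bh)      # False - the modification is detected
```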
Choosing the appropriate signature algorithm and public key size requires care, as an unwise decision may render the efforts so far meaningless. The traditional RSA key type with at least a 2048-bit key size, or any elliptic-curve-based key with at least a 224-bit key size, can meet expectations according to NIST. As multiple keys can be set, you can configure both an RSA key for compatibility and an ECDSA key for modernity, to enjoy the benefits of elliptic-curve cryptography such as smaller key sizes. As the hash algorithm, SHA-2 with any digest size is suitable. However, SHA-1 is strongly contraindicated, as there is a chosen-prefix collision attack against this algorithm. The attack needs a large amount of computational power, which makes a real-time break almost impossible even taking into account the average delay of a mail, though there is the possibility of a replay attack, where the attacker sends almost the same mail later, with a modified bank account number and payment deadline, while the signature is still valid. The weak point of the DKIM mechanism is not the cryptography, but the fact that DKIM cannot provide confidentiality.
MTA Strict Transport Security (MTA-STS) is intended to solve the confidentiality issue of mail sending. Though the SMTP protocol originally did not contain any encryption capability, an optional encryption capability was later added as an extension, which is still not required. The situation is similar to the relation between the unencrypted HTTP and the encrypted HTTPS protocols. An attacker with man-in-the-middle capability is able to force a client to use unencrypted HTTP if the connection was initiated with that protocol, independently of whether the server supports the encrypted HTTPS protocol. This means that all traffic between the parties, which may contain sensitive information, can be intercepted by the attacker. There is one condition for a successful attack: the server must support the unencrypted version of the protocol. Unencrypted HTTP is not necessarily present, but plaintext SMTP always is, which means the information that encryption must be used has to reach the client some other way. This is the role of Strict Transport Security, which is implemented as a header in the case of HTTP and as a DNS record publication in the case of SMTP.
_mta-sts.example.com. IN TXT "v=STSv1; id=20210602165800Z;"
During our investigation, we identified that most of the MTA-STS-supporting domains use a date in the id field. Assuming that the date in the id field is the publication date of the latest policy, and also assuming that policies do not change very often, a cumulative diagram can be drawn showing how many MTA-STS policies were published over time. Though the curve may suggest a rise, the current prevalence does not even reach one percent.
Unlike the HTTP protocol, the SMTP protocol cannot be redirected to an encrypted channel, as SMTP contains nothing like the "moved permanently" response status code of HTTP, so the information that the server wants to use encrypted communication must reach the client on an independent channel. In this case, DNS is used, just like in the Sender Policy Framework or DomainKeys Identified Mail. Similar to the earlier mechanisms, MTA Strict Transport Security is published in a TXT record. The difference is that the TXT record indicates only the fact of presence and the current version, while the policy itself is distributed via HTTPS from a "well-known" location. The DNS record contains a short string (id) that uniquely identifies a given instance of the policy, so that senders can determine when the policy has been updated.
version: STSv1
mode: enforce
mx: mail1.example.com
mx: mail2.example.com
max_age: 86400
The policy is similar to the Strict Transport Security header in the case of HTTP. It contains a list (mx) of servers by which mail for this domain might be handled, a lifetime value (max_age) for the policy, meaning that a client can cache the policy for up to this value, and a mode (mode) indicating the expected behavior of the sending server in the case of a policy validation failure. The policy may declare that the sending server must not deliver (enforce) the message to a server which fails hostname matching or certificate validation, or has no TLS capability. For testing purposes, there is a mode (testing) that makes it possible to deliver mail despite a validation failure, while the sending server can send a report about the failure (see TLS-RPT later). There is also a mode (none) indicating that there is no active policy. Today, just over half of the owners of MTA-STS-publishing domains enforce the policy, while the other half are presumably in a testing phase. However, it should be noted that the ratio of domains supporting MTA-STS in the top one million is only 0.7%.
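Parsing such a policy file is straightforward, as the sketch below shows. It is a simplification: a real sender fetches the file over HTTPS from the mta-sts host of the recipient domain (https://mta-sts.example.com/.well-known/mta-sts.txt) and validates the server certificate; here only the text itself is parsed.

```python
# Sketch of parsing an MTA-STS policy file (RFC 8461). A real sender
# fetches it over HTTPS from the "mta-sts" host of the recipient
# domain; here we only parse the policy text itself.

def parse_mta_sts(policy_text):
    policy = {"mx": []}
    for line in policy_text.splitlines():
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "mx":
            policy["mx"].append(value)    # mx may occur multiple times
        elif key:
            policy[key] = value
    return policy

policy = parse_mta_sts(
    "version: STSv1\nmode: enforce\n"
    "mx: mail1.example.com\nmx: mail2.example.com\nmax_age: 86400"
)
print(policy["mode"])   # enforce
print(policy["mx"])     # ['mail1.example.com', 'mail2.example.com']
```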
Publishing an MTA-STS policy is necessary, but far from sufficient. It should be enforced to minimize the risk of a MITM attack forcing the SMTP communication to remain unencrypted. Nonetheless, the existence and enforcement of the TLS protocol do not necessarily guarantee confidentiality. The quality of the encryption is highly dependent on the details of the TLS settings. They should be determined and reviewed regularly with due care, bearing in mind the requirements of the relevant compliance standards (NIST, PCI DSS, HIPAA) where necessary.
TLS has a significant benefit, namely that it solves not only the problem of confidentiality but also the problem of integrity targeted by DKIM. However, it should be noted that, unlike Pretty Good Privacy (PGP) or Secure/Multipurpose Internet Mail Extensions (S/MIME), MTA-STS cannot guarantee confidentiality and integrity throughout the entire process of mail delivery, only for the MTA-to-MTA communication, as the name suggests. In client (MUA) to server communication, the same issues exist whenever a protocol has only opportunistic TLS capability, as IMAP and POP3 do. Disadvantages of the mechanism include the fact that it requires the maintenance of a web server to publish the MTA-STS policy, independently of whether the domain owner intends to publish a web page, and also the maintenance of an X.509 certificate for that web server, as the policy may only be downloaded via HTTPS.
Though the topics discussed so far have been strictly related to security, we should also consider business continuity. Introducing the discussed mechanisms carries a risk in terms of mail delivery. The risk cannot be transferred and should not be accepted, but it can be reduced by the SMTP TLS Reporting mechanism, which declares reporting endpoints where sending servers can report the policy validation failures they experience. The policy can be published in one or more TXT records that contain the aggregate report URI (rua), which can be either an email address or an HTTPS endpoint.
_smtp._tls.example.com. IN TXT "v=TLSRPTv1; rua=mailto:email@example.com"
_smtp._tls.example.com. IN TXT "v=TLSRPTv1; rua=https://reporting.example.com/v1/tlsrpt"
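Extracting the reporting endpoints from such a record can be sketched in a few lines (a simplification: a real implementation would also validate the URI schemes and handle malformed records gracefully). The rua tag may carry several comma-separated URIs.

```python
# Sketch of extracting TLS-RPT reporting endpoints from a TXT record
# (RFC 8460). The rua tag may list several comma-separated URIs.

def tlsrpt_endpoints(txt_record):
    tags = dict(
        part.strip().split("=", 1)
        for part in txt_record.split(";") if "=" in part
    )
    assert tags.get("v") == "TLSRPTv1", "not a TLS-RPT record"
    return [uri.strip() for uri in tags["rua"].split(",")]

print(tlsrpt_endpoints("v=TLSRPTv1; rua=mailto:email@example.com"))
# ['mailto:email@example.com']
print(tlsrpt_endpoints(
    "v=TLSRPTv1; rua=https://reporting.example.com/v1/tlsrpt"
))
# ['https://reporting.example.com/v1/tlsrpt']
```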
One advantage of the mechanism is that it makes it possible to receive information about MTA-STS verification failures of sending mail servers, which can indicate either a simple technical issue or an active attack. The disadvantage is that it may require mail servers to use a protocol (HTTPS) that is unrelated to mail sending and therefore demands extra functionality in the mail server implementation. Furthermore, that protocol has its own security mechanisms (HSTS, Report-To, Network Error Logging, Expect-CT, …) which should be supported by the HTTP client in the mail server to provide the highest available security level. Policy violation reporting by mail requires properly configured DomainKeys Identified Mail (DKIM) on the report-sending mail server. If DKIM verification fails on the report-recipient mail server, the report can be ignored, according to the RFC. However, the RFC does not discuss the fact that the Sender Policy Framework can also be checked, as the report is actually sent by a mail server in the name of a given domain. With these checks the reporter can be authenticated, though only weakly, as an attacker can buy a domain, apply proper DKIM and SPF settings, and send misinforming reports. However, with that activity there is a risk that the attacker's mail server ends up on a blacklist. The other option, sending the report via HTTPS, gives the attacker a much better opportunity to send misleading reports, as it is much harder for the recipient to authenticate the sender of the report. Maybe this is the reason why the vast majority of domain owners have chosen to receive MTA-STS violation reports by mail, not via HTTPS. This means there is a risk that spoofed reports will be sent by an attacker to generate false positive alerts, thus undermining the trust in these kinds of reports, which could otherwise indicate man-in-the-middle attacks against our mailing system.
Another problem with reporting is the fact that it is completely voluntary. Publishing a TLSRPT policy with an appropriate aggregate report URI (rua) value does not necessarily mean that you will actually receive any reports, as the recipient may or may not support the TLSRPT mechanism. Even if the recipient supports the mechanism, it is far from certain that it supports report sending and also intends to send reports, as configuration and a certain amount of resources are required to do so.
The relation between the previously discussed TLS-RPT and MTA-STS is the same as the relation between DMARC and the Sender Policy Framework (SPF) or DomainKeys Identified Mail (DKIM). When the verification of SPF or DKIM fails, the recipient mail server can send a failure report to the domain owner as defined in the DMARC TXT record. Compared to TLS-RPT, more sophisticated settings are available here. Not only a single type of reporting endpoint can be set, but one for aggregate feedback (rua) and another for message-specific failure information (ruf). A report interval (ri) can also be set, requesting mail receivers to generate aggregate reports separated by no more than the given amount of time, and last but not least, a policy published as part of DMARC applies to the domain and can also apply to its subdomains. The domain owner may request mail receivers to take no specific action (none), to treat mail that fails the DMARC check as suspicious (quarantine) and place it into a spam folder, for instance, or to reject (reject) email that fails the DMARC check.
_dmarc.example.com. IN TXT "v=DMARC1; p=reject; rua=mailto:firstname.lastname@example.org; ruf=mailto:email@example.com"
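Reading the requested receiver action out of such a record can be sketched with a small tag=value parser (a simplification of RFC 7489 parsing; it does not validate tag values and assumes a well-formed record). Note that pct defaults to 100 when absent.

```python
# Sketch of parsing a DMARC TXT record (RFC 7489) and reading the
# requested receiver action; pct defaults to 100 when absent.

def parse_dmarc(txt_record):
    tags = dict(
        part.strip().split("=", 1)
        for part in txt_record.split(";") if "=" in part
    )
    assert tags.get("v") == "DMARC1", "not a DMARC record"
    tags.setdefault("pct", "100")
    return tags

record = ("v=DMARC1; p=reject; rua=mailto:firstname.lastname@example.org; "
          "ruf=mailto:email@example.com")
tags = parse_dmarc(record)
print(tags["p"])    # reject - mail failing the check should be refused
print(tags["pct"])  # 100    - the policy applies to every message
```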
The ratio of the requested actions dramatically shows how heavily business depends on email systems and how much more important business continuity is considered than security. With a security-first mindset, any mail that fails the DMARC check would be treated as suspicious and quarantined or rejected. Except among the top 1,000 domain owners, only a minority are brave enough to ask mail receivers to quarantine or reject; the majority ask the receivers to take no action, which calls into question whether their usage of DMARC makes any sense.
Authenticating the issuer of the report is as hard as in the case of SMTP TLS Reporting (TLS-RPT), so there is a risk that the reports will be spoofed. Unlike TLS-RPT, DMARC supports delivering reports only by mail, so authentication of the sender is possible, but only in a limited manner.
Incidentally, a situation is conceivable in which there are two servers that both support DMARC and intend to send reports when experiencing an issue, but the DMARC check of the reporting mail fails mutually due to a DMARC configuration or DMARC check implementation issue. In that case, each server's DMARC report triggers a DMARC report from the other, which could potentially cause an endless loop. There are also limits in the DMARC specification, like the percentage (pct) of messages to which the DMARC policy is to be applied, or the earlier mentioned report interval (ri), but these options are designed to help with the introduction of DMARC, not to avoid attacks.
The mechanisms discussed above are about machine-to-machine (M2M) communication. There are no, or no well-declared, user-facing consequences of success or failure. BIMI (Brand Indicators for Message Identification) seeks to fill this gap by making the result of the earlier mechanisms visible to the user. BIMI enables the display of brand-controlled logos within supporting email clients (MUA), but only if the email is well authenticated. This essentially means that the sender domain must publish DMARC for the domain and its subdomains, the policy must be set to either quarantine or reject, and the percentage (pct) cannot be set to anything less than 100. For the brand's logo to be displayed in the email client, the email must pass DMARC authentication and BIMI validation checks, ensuring that the organization's domain has not been impersonated and the brand indicator is valid.
default._bimi.example.com. IN TXT "v=BIMI1; l=https://example.com/bimi/logo.svg"
The configuration is relatively simple compared to the previously described mechanisms, but it must be noted that there is also a relatively simple attack against it. Once the logo is published, a malicious actor could register a lookalike domain, configure DKIM and DMARC, copy the indicator of the attacked domain, and publish it as its own. In that case, the same logo would appear in the recipient's email client whether the email comes from the attacked or the attacker domain, so for the recipient it is just as hard to distinguish the two domains as it was before. The situation is even worse when the attacked domain does not publish BIMI but the attacker's domain does, so the latter may seem more trustworthy than the former. This weakness is targeted by Verified Mark Certificates (VMC), which provide digitally signed evidence that the organization is allowed to use the brand indicator, similar to extended validation, where not only the domain but the organization name and other descriptive data are certified. The mechanism is based on X.509, so it carries all the complexity and difficulty of it, including validation, revocation checking, and certificate transparency. Verified Mark Certificates can only be purchased by an organization whose logo is trademarked, so it is feared that the VMC part of BIMI takes us back to pre-Let's Encrypt times, when there was no chance of getting a certificate signed by a widely accepted certificate authority for free, meaning it cannot reach high prevalence, just like extended validation.
DNSSEC is a suite of extension specifications for securing data exchanged via the DNS protocol over untrusted networks, such as the internet. DNSSEC ensures the authenticity and integrity of data, but not confidentiality. As the information stored in the DNS records related to the previously discussed mechanisms (SPF, DMARC, DKIM…) is public, confidentiality is not required, but authenticity and integrity are, meaning that we need to be sure the information has not been tampered with by an attacker. DNS over TLS (DoT) and DNS over HTTPS (DoH) also provide authenticity and integrity in addition to confidentiality, but DNSSEC has a significant advantage over them: it can ensure authenticity and integrity throughout the potentially recursive name resolution procedure. DoT and DoH can provide confidentiality, integrity, and authenticity between the DNS client and the DNS resolver, but it is not possible to ensure that all DNS servers use DoT/DoH when serving the request, and the client has no information about whether they did. It must be noted that these two protocols were never intended to achieve that goal, but rather to provide confidentiality, and thus privacy, to the end user.
In the case of DNSSEC, integrity and authenticity are ensured by digital signatures. Answers from DNSSEC-protected zones are digitally signed by a key whose public part can be found in the DNSKEY record of the given domain. Using this public key, a DNS client can check the integrity and authenticity of the answer. As in any other case where integrity and authenticity are ensured by digital signatures, the key issue is the chain of trust, along with the key types, key sizes, and digest algorithms used in the chain. Public keys found in DNSKEY records are signed by the DNSKEY of the upper-level domain, up to the top-level (TLD) and root domains. The parameters of these keys determine the security level. As DNSSEC is a relatively old protocol of the internet and its earlier versions did not support strong algorithms and large key sizes, backward compatibility means that small key sizes are still in use, although large ones are also supported.
Only an insignificant minority of the top-level domains currently use a 1024-bit RSA key as their strongest key. Mostly, 2048-bit RSA keys are used, though it must be noted that RSA keys with smaller key sizes are also in use. The prevalence of elliptic-curve-based public keys (ECDSA), despite all their advantages, is particularly low among the top-level domains. Only 256-bit keys are used, for which the equivalent RSA key size is 3072 bits; neither larger key sizes nor Edwards-curve Digital Signature Algorithm (EdDSA) curves are used, though the latter is now recommended. Digest algorithms do not follow the same pattern as key types and sizes: backward compatibility has no significant effect. Only an insignificant minority offer the weakened SHA-1 algorithm; the majority of the top-level domains offer strong digest algorithms.
It should be noted that the prevalence of DNSSEC is unfortunately low despite NIST declaring that organizations should deploy DNSSEC for all DNS name servers and validate DNSSEC responses on all systems that receive emails to provide authentication and integrity protection to the DNS resource records discussed above.
DANE is a protocol that allows binding X.509 certificates to domain names using the previously discussed DNSSEC. The most important result of binding domain names and X.509 certificates is that it makes a certificate authority (CA) unnecessary, as the domain owner can declare the X.509 certificate or public key used by a specific service (for instance, an email server) of the domain. In this case, an X.509 public key does not need a third party (CA) to certify that it relates to a specific domain. As part of the issuance of a domain-validated certificate, the domain owner must prove its authority over the domain to the certificate authority, mostly by creating a DNS record with a specified value. In the case of DANE, however, a TLSA record has already been created to store the public key or its hash, so authority is proven by matching the public key or X.509 certificate in the DNS record against the one provided by the service. The mechanism also makes certificate common name and subject alternative name (SAN) matching unnecessary, as the certificate and the domain name are bound by a DNS record belonging unambiguously to the domain name. This means there is no need for a certificate authority to certify the binding, as it is cryptographically proven by DNSSEC. The certificate revocation check, which is the Achilles heel of the entire Public Key Infrastructure (PKI), also becomes unnecessary, as the validity period of a certificate makes little sense in this environment: there is no certification by a third party that could expire, and the domain owner simply stops publishing a key if it is suspected to be compromised. The situation is the same when the domain owner just wants to change keys regularly to decrease their validity period, as would be strongly recommended but is now constrained by the certificate authorities, despite the several weaknesses of the revocation check mechanisms.
_25._tcp.mail.example.com. IN TLSA 2 0 1 ( E8B54E0B4BAA815B06D3462D65FBC7C0 CF556ECCF9F5303EBFBB77D022F834C0 )
The previously emphasized way of working does not mean that DANE cannot work together with PKI. TLSA records have four data fields: certificate usage (2), selector (0), matching type (1), and certificate association data, respectively. Certificate usage specifies whether the certificate association data identifies a leaf certificate (end entity, EE) or a CA certificate (trust anchor, TA), and also whether the certificate given by the server must be issued by a CA trusted by the application doing the verification (PKIX) or not (DANE). Certificate usage can take the following four values, where the certificate provided by the server during the TLS handshake must:

- 0 (PKIX-TA): chain to the CA certificate given in the association data, which must itself be trusted by the client;
- 1 (PKIX-EE): match the end-entity certificate given in the association data and be issued by a CA trusted by the client;
- 2 (DANE-TA): chain to the trust anchor given in the association data, regardless of whether the client trusts that CA;
- 3 (DANE-EE): match the end-entity certificate given in the association data, with no CA involved at all.
The selector specifies whether the full certificate (0) or only the subject public key info (1) of the certificate sent by the server should be matched against the association data, while the matching type specifies how the association data is presented: the entire selected data can appear verbatim (0), or as its SHA-256 (1) or SHA-512 (2) hash.
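Put together, the matching itself is a small amount of logic. A minimal sketch, assuming the DER-encoded certificate and SubjectPublicKeyInfo bytes have already been extracted from the TLS handshake (the function and parameter names are illustrative, not part of any standard API):

```python
import hashlib

def tlsa_match(selector: int, matching_type: int,
               association_data: bytes,
               cert_der: bytes, spki_der: bytes) -> bool:
    """Check a presented certificate against one TLSA record's fields."""
    # Selector: 0 = full certificate, 1 = SubjectPublicKeyInfo only.
    selected = cert_der if selector == 0 else spki_der
    # Matching type: 0 = exact bytes, 1 = SHA-256 digest, 2 = SHA-512 digest.
    if matching_type == 0:
        presented = selected
    elif matching_type == 1:
        presented = hashlib.sha256(selected).digest()
    else:
        presented = hashlib.sha512(selected).digest()
    return presented == association_data
```

Certificate usage (PKIX-TA/EE versus DANE-TA/EE) then decides which certificate of the chain this comparison is run against and whether a conventional PKIX validation must also succeed.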
DANE would mean a giant leap toward a decentralized internet while preserving support for the traditional PKI, giving the domain owner a kind of digital self-determination. At the same time, it would solve the problem that clients currently trust a predefined group of root CAs, whose membership depends on the vendor of the client application, regardless of the fact that the domain owner trusts only the one CA that actually issued the certificate used by the server. With DANE, constraints could be applied to the number of trusted CAs, including the number zero, meaning that a domain owner could decide to skip CAs entirely.
Despite all the advantages of DANE, it has one unavoidable disadvantage, namely the lack of client application support, which makes DANE only a theoretical solution to several serious practical issues.
Given the serious shortcomings of the old-fashioned mail transfer protocol, it is strongly recommended to arm email systems with as many additional security mechanisms as possible. NIST, accordingly, recommends that organizations deploy SPF, MTA-STS, DANE, and DNSSEC to avoid receiving mail from unauthenticated sources over an unreliable channel. Although there are overlaps between the discussed mechanisms, all of them are needed to ensure the complete confidentiality, integrity, and authenticity of received mail, and to learn whether senders are experiencing any suspicious behavior. The mechanisms have no significant introduction or maintenance costs, especially given their undoubted benefit. Organizations should declare policies to help each other make informed decisions: which servers are authorized to send email in the name of their domains (SPF), whether they want to use encryption (MTA-STS, DANE) to ensure confidentiality when receiving mail, what evidence shows that a received mail was in fact sent by its claimed sender (DKIM), and how parties can inform each other when they encounter errors or suspicious behavior (DMARC, TLS-RPT). They also have to prove the integrity and authenticity of these policies themselves (DNSSEC). In short, the “never trust, always verify” principle of the Zero Trust Security Model is essential to filter out suspicious senders with their deceptive content and malicious attachments, ideally before such content can cause a business email compromise.
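Each of these policies is ultimately just a DNS record. The sketch below illustrates what an organization might publish for the hypothetical domain example.com; the record names follow the usual conventions, but the values are placeholders, not recommended policies:

```python
# Hypothetical policy records for example.com; values are illustrative only.
POLICY_RECORDS = {
    # Which servers may send mail in the name of the domain.
    "SPF":     ("example.com.", "TXT", "v=spf1 mx -all"),
    # Public key used to verify signatures on mail sent by the domain.
    "DKIM":    ("selector._domainkey.example.com.", "TXT", "v=DKIM1; p=<public key>"),
    # What receivers should do with failing mail, and where to send reports.
    "DMARC":   ("_dmarc.example.com.", "TXT", "v=DMARC1; p=reject; rua=mailto:dmarc@example.com"),
    # Signals that an MTA-STS policy is published for the domain.
    "MTA-STS": ("_mta-sts.example.com.", "TXT", "v=STSv1; id=<policy version>"),
    # Where receivers should send TLS failure reports.
    "TLS-RPT": ("_smtp._tls.example.com.", "TXT", "v=TLSRPTv1; rua=mailto:tlsrpt@example.com"),
    # Binds the mail server's certificate to the domain (the TLSA record above).
    "DANE":    ("_25._tcp.mail.example.com.", "TLSA", "2 0 1 <SHA-256 digest>"),
}

for mechanism, (name, rtype, value) in POLICY_RECORDS.items():
    print(f"{mechanism:8} {name} IN {rtype} {value}")
```

Publishing this handful of records, and protecting them with DNSSEC, is the entire cost of letting every correspondent verify the organization's mail policies.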
This blog post is licensed under the terms of the Creative Commons Attribution-ShareAlike 4.0 International (CC-BY-SA 4.0) License.