Black Hat USA 2022 and DEF CON 30
17 Aug 2022 | https://securelist.com/black-hat-usa-2022-and-def-con-30/107184/

Black Hat USA 2022 Briefings wrapped up this past week, along with its sister conference, DEF CON 30. The DEF CON theme was a "Hacker Homecoming", and it really was a fun one. Coming back from the COVID hiatus, the conferences were enthusiastically full compared to the 2021 ghost town, and many of the talks offered great, fresh content.

With the parties and the CTF fun humming along, excellent briefings included Kim Zetter's keynote, "Pre-Stuxnet, Post-Stuxnet: Everything Has Changed, Nothing Has Changed". She is the first journalist to keynote Black Hat, and she spoke on the changes that Stuxnet brought about, and on what gets ignored until it is too late. She specifically included discussion of election infrastructure security, and of the challenges to cyber-norms in light of recent activity in Eastern Europe and the Middle East.

Kim listed the major changes that came about following Stuxnet:

  • A reversal of the trickle-down of techniques and tools, which now flow from APTs to the criminal underground
  • The launch of a cyber arms race and the militarization of cyberspace
  • The politicization of security research and defense
  • The introduction of serious ICS vulnerabilities impacting critical infrastructure

Zetter highlighted the legitimate election security discussion and argued that it is important to have, in spite of the consistent misappropriation and misinformation coming from high-volume conspiracy groups. She spoke about various vote-counting incidents and the lack of accountability in very specific cases. Of course, these actual events have been, and will be, spun into misinformation content, which is unfortunate, but the legitimate discussion must still be held. Interestingly, OAN members were later allegedly kicked out of DEF CON, specifically from the Voting Village.

Zetter quoted the 1997 report of the President's Commission on Critical Infrastructure Protection, "Critical Foundations: Protecting America's Infrastructures": "The capability to do harm—particularly through information networks—is real; it is growing at an alarming rate; and we have little defense against it." Keep in mind it was authored 25 years ago.

Fast-forward to 2022, and Zetter mentioned the technical debt behind the Colonial Pipeline ransomware fiasco, which overwhelmed the East Coast fuel supply chain. She discussed how quickly Colonial paid the ransom, its lack of security preparation, and earlier audits of its "atrocious" security practices ("an eighth grader could have hacked that system"). Not long after, CISA re-released yet another set of security guidelines for pipeline owners and operators. Unfortunately, Zetter made no mention of accountability for the decision-makers behind the Colonial fiasco.

Her talk then turned to the challenges that the Ukraine-related IT Army poses to "cyber-norms", and to the recent incidents in Iran, where 4,000 gas pumps were disabled and a steel plant suffered a severe equipment malfunction. She suggested these events will also likely leave a mark on the future stability of cyberspace.

Another favorite talk came from a speaker still stuck in Taiwan with visa issues. Orange Tsai enthusiastically gave a remote, well-structured, insightful explanation of his research on Microsoft's hash tables and attacking them through IIS, "Let's Dance in the Cache – Destabilizing Hash Table on Microsoft IIS". The codebase he addressed is more than a decade old, and he danced all over web services and their authentication. Hopefully he will be there in person for future work.

Amongst all the village dazzle, DEF CON included a social engineering village, with talks spanning policy discussion, panels on getting started in social engineering, and more. Its live-action vishing challenge is a thrill. I am catching up on one of the reading titles recommended during a panel, "How to Make People Like You in 90 Seconds or Less".

It’s great to see people slowly returning to fully masked, in-person venues. See you next year!

How we took part in MLSEC and (almost) won
28 Oct 2021 | https://securelist.com/how-we-took-part-in-mlsec-and-almost-won/104699/

This summer Kaspersky experts took part in the Machine Learning Security Evasion Competition (MLSEC) — a series of trials testing contestants' ability to create and attack machine learning models. The event consisted of two main challenges — one for attackers, and the other for defenders. The attacker challenge was split into two tracks — Anti-Malware Evasion and Anti-Phishing Evasion. Even though in our routine work we tend to deal with ML-based protection technologies, this time we decided to step into the attackers' shoes and chose the offensive tracks. Our Anti-Malware Evasion team consisted of two data scientists — Alexey Antonov and Alexey Kogtenkov — and security expert Maxim Golovkin. The Anti-Phishing Evasion team comprised two more data scientists — Vladislav Tushkanov and Dmitry Evdokimov.

MLSEC.IO Phishing Track

In the phishing track the task was to modify 10 (synthetic) phishing samples (i.e. HTML pages) to convince seven phishing detection models that they were benign. A model was considered bypassed if it returned a probability of less than 0.1 for a sample. Yet there was a catch: after the modifications each sample had to look the same as before (or, to be exact, rendered screenshots of the original and the modified HTML file had to have identical hashes). The models were available via an API, meaning that the setting was black-box. To win, we had to fool as many models as possible for each sample using the fewest possible API queries.

What we did

At first, we thought of attempting a classic model replication attack (see AML.T0005 of MITRE ATLAS), but after we entered the competition we observed that the leader had already achieved the highest possible score using just 343 API calls (with one full upload costing 70 calls). This sort of attack would involve submitting a large number of phishing/non-phishing pages to retrieve model outputs and training "shadow" models of our own, which we would then attempt to bypass offline. However, we did not have the API query budget for that, which probably made the setting a bit more realistic. We therefore had to rely on our domain expertise and pure chance. We started off by submitting several clean (benign) web pages — such as Wikipedia's — to check what probabilities the models would return. While the first three models were quite happy with them, the remaining four consistently returned probabilities way above 0.1. From the API response speed we inferred that the models might be ingesting raw HTML without any kind of browser rendering.

Littering the page with invisible text

We composed a simple script to check render hashes and tried our first solution — adding a large hidden chunk of Wikipedia text to the phishing sample, fixing invalid HTML and applying minification. This worked, but only for the first three models. We wondered whether some of the models could be overfitting the provided samples. As we went along, we learned that the last three models often had the same scores, so we only tested our samples on the previous model (likely the most sensitive one) to save API calls.
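The hash check itself only needs a headless browser and a hash function; here is a minimal sketch, assuming Selenium with headless Chrome (the grader's exact rendering setup is unknown to us, so this is an approximation):

import hashlib
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

opts = Options()
opts.add_argument("--headless")

def render_hash(path: str) -> str:
    # Screenshot the locally rendered page and hash the PNG bytes.
    driver = webdriver.Chrome(options=opts)
    try:
        driver.get("file://" + path)
        return hashlib.sha256(driver.get_screenshot_as_png()).hexdigest()
    finally:
        driver.quit()

assert render_hash("/tmp/00.html") == render_hash("/tmp/00_modified.html")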

Obfuscation via byte shift

When working on phishing detection, we, like Rutger Hauer in Blade Runner, had seen things. Phishing payloads hidden in Morse code. Phishing pages consisting almost entirely of screenshots in base64. Actual page text encrypted with ROT13. Drawing on this domain knowledge, we devised our first obfuscation scheme:

  1. Create a small fake "personal blog" page.
  2. Take the actual phishing page, shift all of its characters by n and store the result as a string.
  3. On page load, shift the characters back and document.write the result into the page (a toy sketch of the scheme appears below).

However, this turned out to be a bad idea: not only did the shifting process create all kinds of escaping issues, the last three models still sounded alarms on our samples. We added popular header tags, such as <meta>, which somehow led to poorer results on the first three models. It felt like the last three models were unhappy about the large number of HTML tags or, probably, the high-entropy payload string.
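To make the scheme concrete, here is a toy version of it (our own illustration, not the contest code; it assumes an ASCII-only page, with json.dumps standing in for the escaping work that caused us so much grief):

import json

N = 13  # shift amount

def shift_obfuscate(page_html: str) -> str:
    # Shift every character up by N; the JS stub below shifts them back
    # and document.write()s the original page on load.
    shifted = "".join(chr(ord(c) + N) for c in page_html)
    return f"""<html><body>
<h1>My travel notes</h1><p>Just another personal blog.</p>
<script>
var s = {json.dumps(shifted)};
var out = "";
for (var i = 0; i < s.length; i++) out += String.fromCharCode(s.charCodeAt(i) - {N});
document.write(out);
</script>
</body></html>"""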

Obfuscation via byte integer encoding

We then tried another kind of obfuscation. Instead of shifting, we encoded each byte as an integer and put the numbers into invisible <img> tags. This fooled the first four models, but not the last three. Still, the results were better. We wondered whether the last three models were reacting strongly to the obfuscator code itself. Maybe they didn't like document.write? We submitted an empty page with the obfuscator present and learned that the probabilities were quite low — from 0.074 for the first four models to 0.19 for the remaining three — meaning the phishy JavaScript was not what the models were looking at.

Less tags, more text

We hypothesized that the models somehow took into account character distribution. Considering their probable aversion to HTML tags, we used a big chunk of raw text — an excerpt from the Bible. We added the relevant <meta> to the header and hid chunks of numericized bytes among the verses, like this:

20:18 And he said, Whether [101 49 99 114 117 119 89 79 108 80 78 90 65 83 83 47 56 122 74 74 57 
69 104 108 85 67 105 72 106 108] they be come out for peace, take them alive; 
or whether they [70 50 43 97 107 121 71 68 48 47 104 105 83 86 86 108 107 106 107 48 114 111 49 
114 78 83 49 85 118 75] be come out for war, take them alive.

This worked! Nine pages out of 10 bypassed all seven models, while the 03.html sample was rejected as too large (the maximum size was experimentally determined to be 10 MB). Moreover, all the probabilities were the same:

Model        00     01     02     03     04     05     06
Probability  0.02   0.02   0.02   0.02   0.084  0.087  0.087

By that time, we still had about 50 API calls left.

Back to base (64)

This obfuscation scheme was highly inefficient: each byte at least quadrupled in size. 03.html was a monster of a page, weighing 4 MB due to a few high-res base64-encoded images. We dug into the source code and noticed that some of them were duplicated, so we carved them out. The page slimmed down to 1.7 MB. Alas, to keep the obfuscated page under 10 MB, we were forced to radically increase the numbers-to-text ratio. The last three models sounded alarms, probably because they were suspicious of the unusual character-count distribution. However, we learned that if we changed the separator from a space to \n, the sample bypassed them, so the models evidently did at least some kind of processing line by line. In addition to being inefficient, the pages loaded very slowly — so slowly, in fact, that the grading system returned a screenshot equality check failure for 03.html with byte-to-integer obfuscation. We had been wary of using base64 at first, as it was commonly used in the original samples, but in the dying hours of the competition we decided to give it a try and placed base64 chunks among the cloak text:

PGhlYWQ+PG1ldGEgY2hhcnNldD11dGYtOD48bWV0
1:2 And the earth was without form, and void; and darkness was upon
YSBjb250ZW50PSJ3aWR0aD1kZXZpY2Utd2lkdGgs
the face of the deep. And the Spirit of God moved upon the face of the
IGluaXRpYWwtc2NhbGU9MS4wIiBuYW1lPXZpZXdw
waters.
b3J0PjxtZXRhIGNvbnRlbnQ9IlNpdGUgRGVzaWdu

It worked! The page loading time also decreased sharply, and we were able to finally receive the full score for 03.html with the following model outputs:

Model        00     01     02     03     04     05     06
Probability  0.017  0.017  0.017  0.017  0.072  0.076  0.076

Thus, we were able to fool the models and took first place in the anti-phishing track.
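For the curious, the interleaving step of the final scheme is easy to reproduce; here is a sketch (our own reconstruction: the decoding stub that strips non-base64 lines is omitted, and the chunk width is arbitrary):

import base64

def cloak(page: bytes, verses: list[str], width: int = 40) -> str:
    # Alternate fixed-width base64 chunks of the real page with lines of
    # innocuous text; '\n' as the separator kept the models happy (see above).
    b64 = base64.b64encode(page).decode()
    chunks = [b64[i:i + width] for i in range(0, len(b64), width)]
    lines = []
    for n, chunk in enumerate(chunks):
        lines.append(chunk)
        lines.append(verses[n % len(verses)])
    return "\n".join(lines)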

MLSEC.IO Anti-Malware Track

The anti-malware track consisted of two parts. The defensive part was about creating anti-malware models resistant to adversarial attacks. Note that these models had to meet certain quality requirements regarding FPR/TPR on a hidden dataset, so it was impossible to submit a model that would simply detect all files. The defenders' task was to provide six models for the attackers to target.

The adversarial part was about modifying 50 malicious files in such a way as to bypass as many defense track models as possible (by making them appear benign). At the same time, the modified files were tested by the sandbox contest system. The files were to behave exactly the same way as the original malicious objects.

In the attack competition there was an API allowing us to get model predictions (0 for clean files, 1 for malicious ones). It was also possible to send files for a complete test that included not only a model classification check, but also a sandbox behavior comparison against the originals.

Dataset

The organizers did not post a dataset for training the models (which might have been useful in both the defensive and attacking parts for building and experimenting with models). However, they suggested using the Ember dataset, which has about 200,000 pre-calculated feature vectors for labeled clean and malware files (100,000 for each class), as well as a large number of unlabeled vectors. The dataset offers quite a powerful set of features, including byte-distribution and byte-entropy histograms, header attributes, file sections, information about the file's readable strings, and more.
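For anyone who wants to reproduce the setup, a minimal loading sketch, assuming the ember package from the dataset's GitHub repository (unlabeled vectors carry the label -1 there):

import ember

# Pre-computed train/test matrices; ember.create_vectorized_features must
# have been run once over the data directory beforehand.
X_train, y_train, X_test, y_test = ember.read_vectorized_features("/data/ember2018/")
labeled = y_train != -1          # keep only the labeled clean/malware vectors
X_train, y_train = X_train[labeled], y_train[labeled]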

First Experiments

The contest topic strongly overlapped with our recent article about attacks on ML models. So we decided to apply the techniques discussed in the article. First, we trained a neural network model on the Ember dataset using its labeled part (assuming that the defenders would do the same). Then, for each target malicious file, we began to iteratively change certain features (specifically, the byte histograms and the string features) using gradient steps, thereby decreasing the probability of “bad” label prediction by the model. After several steps, a new set of features was obtained. Next we had to create a modified file that would have such features. The modified file could be constructed either by adding new bytes to the end of the file (increasing the overlay) or by adding one or more sections to the end of the file (before the overlay).
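The gradient part of this procedure looks roughly as follows (a sketch only; model, feature_vector and MUTABLE_IDX are stand-ins for the surrogate network, the Ember feature vector of the target file, and the indices of the byte-histogram/string features we allowed ourselves to change):

import torch

x = torch.tensor(feature_vector, dtype=torch.float32, requires_grad=True)
mask = torch.zeros_like(x)
mask[MUTABLE_IDX] = 1.0               # only histogram/string features may move

for _ in range(200):
    prob = torch.sigmoid(model(x))    # surrogate's P(malware) for the current x
    if prob.item() < 0.1:
        break                         # looks clean enough to the surrogate
    prob.backward()
    with torch.no_grad():
        x -= 0.05 * x.grad * mask     # gradient step along mutable features only
        x.clamp_(min=0.0)             # histogram bins cannot go negative
    x.grad.zero_()

The new feature values then have to be realized in an actual PE file, which is where the overlay or section appending comes in.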

Note that this file modification method significantly reduced the probability of files getting classified as malware — not only for the attacked neural network, but also for other completely different architecture models we trained on the same dataset. So the first results of the local testing experiments were quite impressive.

Yet five out of six contest models continued detecting the modified files just like the originals. The only "deceived" model, as it turned out later, was simply bad at detecting malicious files in the first place and was easily confused by almost any modification of the original. There were two possibilities: either the participating models used none of the features we had changed in the attack, or they employed heuristics that neutralized the effect of the changes. For example, the basic heuristic proposed by the organizers was to cut off the file's last sections: this way the effect of the added sections would simply be ignored.

What features are important for the contest models?

Our further steps:

  1. We tried to find out which features were important for the classifiers. To do this, we trained a boosting model on the proposed dataset. Then we measured the importance of individual features for the target malicious files using Shapley values. The picture below shows the features affecting the classification results the most. The color represents the feature value, and the position on the X axis shows whether this value pushes the file into the "clean" or the "malware" zone.

    Feature importance for file classification

    For example, the timestamp feature has a significant impact on the classifier: the smaller its value (i.e., the older the file), the more likely the file is to be considered "clean".

  2. From the highest impact features we selected those that can be changed without breaking the executable file. We assumed that the contestants’ models should act similarly to our boosting model, for they depended on the same features.

    During our model research, we found that the header, import table and directory table features are sometimes more important than the file section data. So if you take a clean file, remove all its sections and replace them with sections from a malware file, three out of six models will still consider it "clean". We also found that one of the models used a heuristic to cut off the last sections: if malware sections were added to the end of a clean file, the model's verdict would be "clean", but if they were inserted before the clean ones, the verdict would change to "malware". Finally, we identified features that helped to reliably bypass the four models mentioned. For the other two, we found no consistent method of generating adversarial files (even non-working ones).

    To completely change the section features with only a minor file modification, we discovered an interesting shortcut (a toy demonstration follows below). To calculate the feature vector, the creators of the Ember dataset used the FeatureHasher class from the sklearn.feature_extraction library. This class turns sequences of pairs (feature name, feature value) into an array of fixed length. First, it determines the position and sign (the sign will be important further on) from the hash of the feature name. Then FeatureHasher adds or subtracts (according to the sign) the corresponding feature value at that array position. The name of a section is used as the key for this hashing, and the value is determined by its size and entropy. Thus, for any given section you can add another one with a specially constructed name, so that the features of the new section fall into the same cell of the hash table — but with the opposite sign. Taking this idea further, you could zero out all the values in the hash table or construct any other values by appending sections of the desired names and sizes.
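A toy demonstration of the cancellation trick (section names and values are invented; Ember feeds (name, value) pairs into sklearn's FeatureHasher with 50 signed buckets for the section features):

import numpy as np
from sklearn.feature_extraction import FeatureHasher

hasher = FeatureHasher(50, input_type="pair")

def contribution(name: str, value: float) -> np.ndarray:
    # The hashed-feature contribution of a single section.
    return hasher.transform([[(name, value)]]).toarray()[0]

evil = contribution(".evil", 31337.0)      # the section we want to hide
for i in range(10_000):                    # ~100 tries expected: 50 buckets x 2 signs
    name = f".pad{i}"
    if np.array_equal(contribution(name, 31337.0), -evil):
        print("append a section named", name, "with a matching size/entropy value")
        break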

About the quality of the attacked models

We tried feeding various legitimate applications to the contestants’ models, such as Notepad, Paint, 7Zip, Putty, etc., including many Windows system executables. To our surprise, the models would very often recognize them as malicious. For example, the SECRET model, which took first place in the defensive part of the contest, detected most of the clean files we fed to it. Other models, too, kept detecting clean applications.

One might incorrectly assume that the best defensive strategy for winning the competition would be to recognize all files as malicious except those that are "clean" in the training dataset. In reality, such models don't work. We think this is because the hidden test dataset is not representative enough to assess the quality of good models. We also believe that the same Ember dataset was likely used by both the contestants and the organizers, so the models overfitted it. In future iterations of the contest we would suggest expanding the test dataset for the defensive part with more clean files.

Final algorithm

As a result of our analysis, we arrived at the following algorithm for modifying the target malicious files:

  1. Take a clean file not detected by any competing model. In this case, we selected the system file setupcl.exe (remember that non-system clean files were often detected by some models).
  2. Partially replace the malicious file’s header to make it look like that of a clean file (but the file itself should remain functional at the same time).
  3. Using the described section hash trick, zero out the “malware” section features, then add sections from the relevant clean file to the end of the file to add those “clean” features.
  4. Make changes to the directory table, so it looks more like a clean file’s table. This operation is the riskiest one, since the directory table contains the addresses and virtual sizes, the modification of which can make the file inoperable.
  5. Replace the static imports with dynamic ones (as a result, the import table turns empty, making it possible to fool models).

After these modifications (without checking the file behavior in the sandbox) we already had ~180 competition points — enough for second place. However, as you will learn later, we did not manage to modify all the files correctly.

Results

Some modification operations are quite risky in terms of maintaining correct file behavior (especially those involving headers, directory tables and imports). Unfortunately, there were technical issues on the contest testing system's side, so we had to test the modified files locally. Our test system differed in some respects, and as a result some of the modified files failed to pass the contest sandbox. Consequently, we scored fewer points than expected and took only 6th place overall.

Conclusion

As anti-phishing experts, we were able to deduce, at least in general terms, how the models worked by observing their outputs, and to create an obfuscation scheme that fooled them. This shows how hard the task of detecting phishing pages actually is, and why real-life production systems do not rely on HTML code alone to block them.

For us as malware experts, it was interesting to dive into some details of the structure of PE files and come up with our own ways to deceive anti-malware models. This experience will help us to improve our own models, making them less vulnerable to adversarial attacks. Also, it is worth mentioning that despite the number of sophisticated academic ML-adversarial techniques nowadays, the simple heuristic approach of modifying malicious objects was the winning tactic in the contest. We tried some of the adversarial ML techniques, but straightforward attacks requiring no knowledge of the model architecture or training dataset were still effective in many cases.

Overall, it was an exciting competition, and we want to thank the organizers for the opportunity to participate and hope to see MLSEC develop further, both technically and ideologically.

Lyceum group reborn
18 Oct 2021 | https://securelist.com/lyceum-group-reborn/104586/

This year, we had the honor of being selected for the thirty-first edition of the Virus Bulletin conference. During the live program, we presented our research into the Lyceum group (also known as Hexane), which was first exposed by Secureworks in 2019. In 2021, we identified a new cluster of the group's activity, focused on two entities in Tunisia.

According to older public accounts of the group’s activity, Lyceum conducted targeted operations against organizations in the energy and telecommunications sectors across the Middle East, during which the threat actor used various PowerShell scripts and a .NET-based remote administration tool referred to as “DanBot”. The latter supported communication with a C&C server via custom-designed protocols over DNS or HTTP.

Our investigation into Lyceum has shown that the group has evolved its arsenal over the years and shifted its usage from the previously documented .NET malware to new versions, written in C++. We clustered those new pieces of malware under two different variants, which we dubbed “James” and “Kevin”, after recurring names that appeared in the PDB paths of the underlying samples.

As in the older DanBot instances, both variants supported similar custom C&C protocols tunneled over DNS or HTTP. That said, we also identified an unusual variant that did not contain any mechanism for network communication. We assume that it was used as a means to proxy traffic between two internal network clusters. Our paper elaborates on the C&C protocol mechanics, the timeline of using the variants and the differences between them.

In addition to the revealed implants, our analysis gave us a glimpse into the actor's modus operandi. We observed some of the commands the attackers used within the compromised environments, as well as the actions taken to steal user credentials. These included the use of a PowerShell script designed to steal credentials stored in browsers, and a custom keylogger deployed on some of the targeted machines.

Finally, we noticed certain similarities between Lyceum and the infamous DNSpionage group, which, in turn, was associated with the OilRig cluster of activity. Besides similar geographical target choices, and the use of DNS or fake websites to tunnel C&C data as a TTP, we were able to trace significant similarities between lure documents delivered by Lyceum in the past and those used by DNSpionage. These were made evident through a common code structure and choices of variable names.

Our presentation from the conference, detailing some of the aspects described above, can be viewed on the Virus Bulletin website.

An even more detailed outline with technical specifics can be found in the paper that accompanied the presentation, also available on the Virus Bulletin website.

Wake me up till SAS summit ends
23 Sep 2021 | https://securelist.com/sas-at-home-2021/104303/

What do cyberthreats, Kubernetes and donuts have in common – except that they all end in "s", that is? All these topics will be covered during the new SAS@Home online conference, scheduled for September 28-29, 2021. To be more specific, there will be a workshop titled "Prevent & Detect Security Threats in the Kubernetes Era" and a presentation titled "Time to Make the Donuts", the latter presumably not about actual doughnuts. As for cyberthreats, the topic is always on the table, because it is the phenomenon we confront every day and the cause that unites us researchers.

What else can we offer during the two eventful days?

  • Kaspersky experts Igor Kuznetsov and Georgy Kucherin will tell a story of how they investigated top-class commercial spyware and dissected an infamous toolset.
  • Rintaro Koike, Shogo Hayashi and Ryuichi Tanabe of NTT Security Japan will present a research paper titled "Operation Software Concepts: A Beautiful Envelope for Wrapping Weapon".
  • Ivan Kwiatkowski and Pierre Delcher of Kaspersky GReAT will describe possible links between the Tomiris malware and the supply-chain attack on SolarWinds.
  • PwC's John Southworth will teach the audience to dance with APT41.
  • More details about the GhostEmperor APT, tools to catch zero-click zero-days, supply-chain attacks in Farsi and, of course, our usual workshops.

Last but not least, we are preparing worthy challenges for everyone interested in malware analysis and threat hunting. During SAS@Home we will run the 9th edition of our, by now, well-established CTF/hack game, in which players will compete in five categories, trying to solve challenges presented by CTF hosts David Jacoby and Marco Preuss. This year's categories are kNOW yOUR eNEMY, dEBUGGERS pARADISE, oLDsKOOL, cODEbREAKER and THE WiLD WEB, each with five amazing levels. You do not need to be a reversing wizard, guru programmer or ninja analyst – there is something for everyone to tackle and solve.

In the end, the top five players will win a seat at Kaspersky xTraining, worth $1,400! However, our game is not just about prizes, but about having fun and learning something new. Always remember: you cannot lose anything, but you can win it all.

Targeted Malware Reverse Engineering Workshop follow-up. Part 2
21 Apr 2021 | https://securelist.com/targeted-malware-reverse-engineering-workshop-follow-up-part-2/101945/

If you have read our previous blogpost “Targeted Malware Reverse Engineering Workshop follow-up. Part 1“, you probably know about the webinar we conducted on April 8, 2021, with Kaspersky GReAT’s Ivan Kwiatkowski and Denis Legezo, to share best practices in reverse engineering and demonstrate real-time analysis of recent targeted malware samples. The experts also had a fireside chat with Igor Skochinsky of Hex-Rays and introduced the Targeted Malware Reverse Engineering online self-study course.

The webinar audience having been so active – it was a very pleasant surprise, thanks again! – not only were we unable to address all the incoming questions online, we didn’t even manage to pack the rest of them in one blogpost. So here comes the second part of the webinar follow-up.

  1. How common are opaque predicates in legitimate software? Can these predicates be leveraged as detection signatures?
    Ivan: It is difficult to provide an answer encompassing all legitimate software. As a general rule, obfuscation or evasion techniques can provide a relevant weak signal potentially indicating malicious behavior, but should not be used for detection.
    Denis: We mostly deal with malicious, not legit code, but I would not expect such tricks there. What for — protection? I would not expect opaque predicates even from third-party protectors.
  2. Do you often come across binary obfuscation methods like nanomites, control flow flattening or VM in malwares?
    Ivan: Such techniques are extremely rare, possibly because attackers know that the presence of such protections will raise suspicion.
    Denis: We have come across several flattening cases lately. I could also name a couple of cases of custom internal VM usage in malware. So, not often, but they do exist.
  3. When it comes to packed executables, are automated unpackers usually good enough (like using dynamic instrumentation to detect tail jump and so forth) or is it more about manual work?
    Ivan: It turns out that packed executables are not as widespread as you would think. They turn up so rarely that I always default to manual work.
    Denis: We mostly deal with targeted malware, and packed executables are not common in this world, I agree.
  4. Do we also see any “exotic” commercial packers like vmprotect?
    Ivan: We don't. However, if this is of interest to you, I strongly recommend watching Vitaly Kamluk's presentation on the subject.
    Denis: Not in this training, but again, I would not say such tools are too popular in the world of targeted malware. Mostly due to being detected by security products, I suppose.
  5. What are the most creative anti-reversing tricks from malware creators you have seen so far?
    Ivan: I would name the LuckyMouse APT, which deploys stripped-down malware samples that, once saved somewhere on the victim's machine, no longer contain any of their configuration. Generally speaking, they're very good at making sure that files obtained by defenders are incomplete.
    Denis: The best anti-reversing trick I have seen is a seasoned software design pro with brain-damaging multi-module development style and 30 years of experience on the other side of the court. The only thing you want to do after the encounter is to yell at him/her, your disassembler, your PC, and yourself. But when you are done at last — well, this is the reason why we do it.

Questions on the Targeted Malware Reverse Engineering course syllabus

You can find the full syllabus here.

  1. Is the training focused on static reverse engineering or do you use dynamic analysis (e.g. debug/emulation) as well? Is the virtual lab analysis limited to static one?
    Ivan: We occasionally use debugging, and debuggers are available in the VM. Most of the work, however, takes place in IDA Pro.
    Denis: Ah, our deep belief in static analysis has affected the training for sure. But we do debugging as well, it is true. For example, in the LuckyMouse track.
  2. Will the analysis exercises deal only with the “final” malicious payloads/files or with analyzing the entire infection chains (e.g. downloader -> dropper/injector -> shellcode)?
    Ivan: It is closer to the other way around. When we have no time to show everything, we focus on the most complex parts of the infection chain (the beginning), tackle all the problems, and leave the easy part (looking at the unobfuscated final stage) as an exercise for the audience.
  3. You have mentioned that a lot of course time will be spent discussing deobfuscation mechanisms. Will there also be a chapter/section dealing on bypassing anti-reversing mechanisms?
    Ivan: The course is organized around the specific real malware cases. There is no theory segment on obfuscation. However, we show many samples that use different techniques and demonstrate how to approach each one of them.
  4. Does the course cover the C2 protocol traffic analysis?
    Ivan: To some extent, yes. One of the tracks is entirely dedicated to analyzing a network utility, understanding and re-implementing its custom protocol.
    Denis: For example, in the Topinambour track you deal with simple C2 communication protocol analysis from the reversing point of view: by analyzing the code, you come to understand what to expect from the traffic.
  5. Do you cover both IDA Python and IDC during the course?
    Ivan: We only cover IDA Python, but the participants are free to use IDC if they choose to.
  6. Will you teach any countermeasures against this kind of anti-reversing techniques?
    Ivan: It’s our intentional choice to focus on real-life cases; and it is a fact that the vast majority of samples I have worked on involved no such protections. One of the malware specimens shown in the course has Anti-VM detection, which doesn’t bother us as we are just reading the code.
  7. What malicious document formats will be analyzed in the training?
    Ivan: The malicious document studied in the course is the InPage exploit.
    Denis: The InPage file format is based upon Compound Document Format, and we will analyze how the Biodata campaign operators had embedded the shellcode into it.
  8. If you detect such antimalware techniques, will there be a link to your previous Yara training: how to write a good detection rule to find such complex anti obfuscation techniques?
    Ivan: As you will probably see, the course is quite packed as it is! We may make a comment here and there about what could be a good Yara rule, but only in passing. I am, however, certain that the training will help you write better Yara rules.
  9. Shall we also learn to write or automate these anti obfuscation tasks at scale?
    Ivan: Yes, a large part of the course focuses on defeating the various protections that prevent us from seeing the actual payload!

Topics not addressed in the Targeted Malware Reverse Engineering training

  1. The course seems to include various topics on RE. Anything that has been left out? Probably saved for a future update to the course.
    Ivan: There are many things we could not get into: Rust/Go malware, CPU architectures beyond x86 and x64 (such as ARM), macOS, etc. But we believe we were able to provide a varied yet realistic sample of what we usually encounter.
    Denis: In the third-level reverse engineering course from Kaspersky, you may expect the use of a decryption framework to facilitate such typical reversing tasks.
  2. Does the course address any malware employing unique file formats, thus requiring one to create an IDA loader module? How often do you deal with malware that uses unique file formats? It is something I am looking to learn.
    Ivan: This is a use case not covered by the course, and in fact one that I have yet to encounter.
    Denis: One quite unique _document_ format with the shellcode in it is featured in the course, but it needs no loader module, as you understand. Pity, but your topic seems to be out of the scope of this training. We are planning to create additional reversing screencasts from time to time — let’s think about covering this, too.

Virtual lab

  1. Will it be possible to do the exercises in a personal lab at home to analyze the samples of the course?
    Ivan: Due to legal restrictions in some countries, participants are required to work in the dedicated virtual lab that we provide and the VM cannot be downloaded. The good news is that it contains all the necessary tools, including a full version of IDA Pro.
  2. Can the lab hours be extended if required?
    Ivan: Virtual machines will indeed be suspended after 100 hours of runtime. We can extend the hours on a case-by-case basis, but we expect this should be enough to complete all the tracks of the training.
  3. Do we need to RDP from a VM?
    Ivan: The virtual environment is accessed directly from the web browser.
  4. Are the VM’s stealthy for the malware, or can they be detected through redpill/no-pill techniques?
    Ivan: The VMs provided in the training make no attempt at concealing what they are. Most of the malware provided does not particularly try to prevent execution in virtualized environments, and in any case the training is focused on static analysis with IDA Pro.
  5. If we write IDA scripts, can we extract them to our home environment at the end?
    Ivan: Sadly, this will not be possible. But the scripts you write should remain relatively modest in size, and will probably not be generic enough to allow future use anyway.

Prerequisites

You can check information on prerequisites here.

  • Do you have any good recommendations on how to prepare for the training? Any prerequisites for this course?
    Ivan: I would advise to check out the demo version of the training. It should give you an idea of whether you meet the prerequisites, and we also provide a number of third-party resources in the introduction in case you need a bit of preparation.
  • Is knowledge of cryptographic algorithms also required? Or shall we learn how to detect them in the binaries?
    Ivan: We touch on that subject lightly. In most cases, figuring out which cryptographic algorithm is used is straightforward. If not, some help will be provided during the solution segments.
  • Knowledge of which languages is required?
    Ivan: Python scripting is required at some point. Other than that, familiarity with compiled languages, such as C or C++, is recommended.

Support & feedback

  • How much support or guidance will be available if I get stuck on an exercise?
    Ivan: We will collect your requests through helpdesk. Also a monthly call with the trainers is scheduled to answer your questions about the course. Otherwise, we are generally available on Twitter: @JusticeRage and @legezo.

Exam/certification

  • Does the Targeted Malware Reverse Engineering training provide for some kind of exam/cert at the end?
    Ivan: There is no exam as such, although each track contains challenging knowledge checks and quizzes to check your progress. Certification will be awarded to all participants who complete all the tracks of the course.

Price

  • How much will this course cost?
    Ivan: $1,400 VAT included.
Future plans/future courses

  • What is the difference between the Targeted Malware Reverse Engineering training and the upcoming third-level Advanced Malware Analysis training?
    Ivan: This is an intermediate-level course, while the upcoming one will be an advanced expert-level course.
Targeted Malware Reverse Engineering Workshop follow-up. Part 1
19 Apr 2021 | https://securelist.com/targeted-malware-reverse-engineering-workshop-follow-up-part-1/101928/

    On April 8, 2021, we conducted a webinar with Ivan Kwiatkowski and Denis Legezo, Senior Security Researchers from our Global Research & Analysis Team (GReAT), who gave live workshops on practical disassembling, decrypting and deobfuscating authentic malware cases, moderated by GReAT’s own Dan Demeter.

    Ivan demonstrated how to strip the obfuscation from the recently discovered Cycldek-related tool, while Denis presented an exercise on reversing the MontysThree malware's steganography algorithm. The experts also had a fireside chat with our guest Igor Skochinsky of Hex-Rays.

    On top of that, Ivan and Denis introduced the new Targeted Malware Reverse Engineering online self-study course, into which they have squeezed 10 years of their cybersecurity experience. This intermediate-level training is designed for those seeking confidence and practical experience in malware analysis. It includes in-depth analysis of ten fresh, real-life targeted malware cases such as MontysThree, LuckyMouse and Lazarus; hands-on learning with an array of reverse engineering tools, including IDA Pro, the Hex-Rays decompiler, Hiew and 010 Editor; and 100 hours of virtual lab practice.

    In case you missed the webinar – or if you attended but want to watch it again – you can find the video here: Targeted Malware Reverse Engineering Workshop (brighttalk.com).

    With so many questions collected during the webinar – thank you all for your active participation! – we lacked the time to answer them all online, so we promised we would come up with this blogpost.

    1. How do you decide whether the Cycldek-actors have adopted the DLL side-loading triad technique, or the actors normally using the DLL side-loading triad have adopted the design considerations from Cycldek?
      Ivan: It is precisely because we cannot really differentiate between the two that we have been very careful with the attribution of this specific campaign. The best we can say at the moment is that the threat actor behind it is related to Cycldek.
      Denis: Even in our training there is another track with .dll search order hijacking – LuckyMouse. I really would not recommend anyone to build attribution on such a technique, because it's super widespread among Chinese-speaking actors.
    2. Does the script work automatically, or do you have to add information about the specific code you are working with?
      Ivan: The script shown in the webinar was written solely for the specific sample used in the demonstration. I prefer to write small programs addressing very specific issues at first, and only move on to developing generic frameworks when I have to, which is not the case for opaque predicates.
    3. Is the deobfuscation script for the shellcode publicly available?
      Ivan: It is derived from a publicly available script. However, my modifications were not made public; if they were, it would make the training a little too easy, wouldn’t it?
    4. Decryption/deobfuscation seems to be very labor-intensive. Have you experimented with symbolic execution in order to automate the process? Have you built a framework that you use against multiple families and (data & code) obfuscation, or do you build tools on an 'as needed' basis?
      Ivan: I have always found it quicker to just write quick scripts to solve the problem instead of spending time on diving into symbolic execution. Same goes for generic frameworks, but who knows? Maybe one day I will need one.
      Denis: Decryption/deobfuscation is mostly case-based, I agree, but we also have disassembler plugins to facilitate such tasks. By the way, such a code base and the habits around it are what create the threshold for changing disassemblers. We have an internal framework for asm-layer decryption; you will meet it in the advanced course, but it's up to the researcher whether to use it or not.
    5. Any insight into the success rate of this campaign?
      Ivan: We were able to identify about a dozen organizations attacked during this campaign. If you want to know more about our findings, please have a look at our blogpost.
    6. Any hint on the code pattern that helped you connect with the Cycledek campaign?
      Ivan: You can find more about this in our blogpost. Even more details are available through our private reporting service. Generally speaking, we have a tool called KTAE that performs this task, and of course the memory of samples we have worked on in the past.
    7. About the jump instructions that lead to the same spot – how were they injected there? Manually using a binary editor?
      Ivan: The opaque predicates added in the Cycldek shellcode were almost certainly inserted using an automated tool.
    8. I am one of the people using the assembly view. After the noping stage I usually have to suffer through long scrolling. You mentioned there is a way to fix this?
      Ivan: Check out this script I published on GitHub a couple of months ago.
    9. Can xmm* registers and pxor be used as code patterns in Yara signatures?
      Ivan: This is in fact one of the signatures I wrote for this piece of malware.
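    For readers who want to try the "noping" discussed in questions 7 and 8 themselves, here is a toy IDAPython sketch (our own illustration, not the script from the webinar): a jz X immediately followed by a jnz X is an unconditional jump in disguise, so the short-form jz (0x74) can be rewritten into a short jmp (0xEB) and the jnz padded out with NOPs.

    import idc, idautils, ida_bytes

    for ea in idautils.Heads():
        nxt = idc.next_head(ea)
        if (idc.print_insn_mnem(ea) == "jz" and idc.print_insn_mnem(nxt) == "jnz"
                and idc.get_operand_value(ea, 0) == idc.get_operand_value(nxt, 0)
                and ida_bytes.get_byte(ea) == 0x74):    # short-form jz only
            ida_bytes.patch_byte(ea, 0xEB)              # jz -> jmp
            for p in range(nxt, idc.next_head(nxt)):
                ida_bytes.patch_byte(p, 0x90)           # jnz -> NOPs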

    Questions on analysis of the MontysThree’s malware steganography algorithm

    1. Do you think there was a practical reason to use steganography as obfuscation, or the malware developer did it just for fun?
      Denis: In my experience, most steps the malefactors take are on purpose, not for fun. With steganography they are trying to fool the network security systems like IDS/IPS: bitmaps are not too suspicious for them. Let me also add that the campaign operators are human, too, so now and again there will be Easter eggs in their products — for example, take a look at the Topinambour track and the phrases used as decryption keys and beaconing.
    2. What image steganography algorithm have you seen hiding in the wild recently, other than LSB?
      Denis: As far as I know, it is LSB alright — Microcin, MontysThree. I would expect some tools to be creating such images for the operators. But take a look at the function we finished on during the short workshop: depending on the decrypted steganography parameters, it could be not just LSB, but the "less significant half a byte" as well (a toy LSB extraction sketch follows this list).
    3. Are there any recent malware samples incorporating network steganography in their C&C-channels, the way the DoublePulsar backdoor did using SMB back in 2017?
      Denis: I suppose you mean the broken SMB packets. Yes, the last trick of the kind I saw was the rare use of HTTP statuses as C2 commands. You might be surprised to learn how many of them there are in the RFCs and how strange some of them are, like 418 "I'm a teapot".
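    As a refresher, classic LSB extraction fits in a few lines (a toy sketch, not MontysThree's actual routine; the bit order, grayscale conversion and payload-length handling are our assumptions):

    from PIL import Image

    pixels = Image.open("container.bmp").convert("L").getdata()
    bits = [p & 1 for p in pixels]         # one least significant bit per pixel
    payload = bytes(
        sum(bit << k for k, bit in enumerate(bits[i:i + 8]))
        for i in range(0, len(bits) - 7, 8)
    )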

    Reverse Engineering: how to start a career, working routines, the future of the profession

    1. How does one get into malware reverse engineering? What are the good resources to study? How can one find interesting malware samples?
      Ivan: You can find a solid introduction at https://beginners.re/. Next, check out https://crackmes.one/ which contains many programs designed to be reverse-engineered, so one can finally move on to malware samples. Worry not about finding the “interesting” ones early on; just try to get good at it, document what you do, and you will find yourself in no time being able to access all the data you could wish for.
      Denis: Do you like meditating on code and trying to understand it? Then I suppose you already have everything you need. I think you should not bother looking for interesting samples in the beginning (if I get your question right) – everything will do. In my experience, the ones on which you will progress most are written by professional programmers, not malware writers, because they just cannot do away with their habit of structuring the data and code, making it multi-thread safe, etc.
    2. Now an experienced malware reverse engineer, where did you start from? Do you have any solid math/programming background from where you moved on to malware reverse engineering? Or what would be the typical path?
      Ivan: I have a software engineering background, and my math expertise is shaky at best. After having met so many people in this field, I can say confidently that there is no typical path beyond being passionate about the subject.
      Denis: Personally I have a math/programming background, but I couldn’t agree more: it’s more about passion than any scientific education.
    3. If you are reverse engineering malware, do you work as a team?
      Ivan: While several researchers can investigate a campaign together, I usually work on samples alone. The time it takes to wrap up a case may vary between a week and several months, depending on the complexity of the investigation!
      Denis: Reversing itself is not a task that is easy to distribute or parallelize. In my experience, you would spend more time organizing the process than you would benefit from the work of several reversers. Typically, I do this part alone, but research is not limited to binary analysis: the quest, the sharing of previous experiences with the same malware/tools, and so forth — it is a team game.
    4. What do you think about AI? Would it help to automate the reverse engineering work?
      Ivan: I think at the moment it is still a lot more A than I. I keep hearing sales pitches about how it will revolutionize the infosec industry and I do not want to dismiss them outright. I am sure there are a number of tasks, such as malware classification, where AI could be helpful. Let’s see what the future brings!
      Denis: OK, do you use any AI-based code similarity, for example? I do, and you know — my impression so far is we still need meat-based engineers who understand how it works to use it right.
    5. How helpful is static analysis, considering the multiple advanced sandboxing solutions available today?
      Ivan: Sandboxing and static analysis will always serve complementary purposes. Static analysis is fast and does not require running the sample. It is great to quickly gather information about what a program might do or for triage. Dynamic analysis takes longer, yields more details, but gives malware an opportunity to detect the sandboxed environment. Then, at the very end, you do static analysis again, which involves reverse-engineering the program with a disassembler and takes the longest. All have their uses.
      Denis: Sometimes you need static analysis because of the multiple advanced anti-sandboxing tricks out there. You also reveal far more details through static analysis if you want to create better Yara rules or distinguish a specific part of custom code to attribute samples to specific developers. So it is up to you how deep the rabbit hole should be.

    Tips on tools, IDA and other things

    1. Do you contribute to Lumina server? Does Kaspersky have any similar public servers to help us during our analysis?
      Ivan: My understanding is that Lumina is most helpful when used by a critical mass of users. As such, I do not think it would make sense to fragment the community across multiple servers. If you are willing to share metadata about the programs you are working on with third parties, I would recommend simply going with the Hex-Rays instance.
      Denis: No, I have never contributed to Lumina so far. I don’t think it is going to be too popular for threat intelligence, but let us wait and see — public Yara repositories are there, so maybe code snippets might also meet the community’s needs.
    2. What tools and techniques do you recommend for calculating the code similarity of samples? Is this possible with IDA Pro?
      Ivan: For this, we have developed a commercial solution called KTAE. That’s what we regularly use internally.
      Denis: Personally, I am using our KTAE. As far as I know, creating custom FLIRT signatures right in IDA could partially cover this need.
    3. Is there any specific reason why you are using IDA under wine? Does it have anything to do with the type of samples you are analyzing?
      Denis: Historically, I have had Windows IDA licenses and a Linux OS, so Wine is my way of running the disassembler. It does not affect the analysis anyway — choose any samples you want under any OS.
    4. What is your favorite IDA Pro plugin and why?
      Ivan: One of the internal plugins developed by Kaspersky. Other than that, I use x64dbgida regularly and have heard great things about Labeless.
      Denis: For sure our internal plugins. And it’s not because of the authorship, they just perfectly meet our needs.
    5. Do you have a plan to create/open an API so we can create our own processor modules for decompilers (like SLEIGH in Ghidra)? The goal being to analyze VM-based obfuscation.
      Igor: Unlikely to happen in the near future, but that's something we're definitely keeping in mind.

    If you have any more questions about Ivan’s workshop on the Cycldek-related tool or about the Targeted Malware Reverse Engineering online course, please feel free to drop us a line in the comments box below or contact us on Twitter: @JusticeRage, @legezo and @IgorSkochinsky. We will answer the rest of the questions in our next blogpost – stay tuned!

SAS@Home is back this fall
30 Sep 2020 | https://securelist.com/sas-at-home-is-back-this-fall/98833/

    The world during the pandemic has prepared many surprises for us. Most of them are certainly unpleasant: health risks, and the inability to travel or meet old friends. One of these unpleasant surprises awaited us in early spring, when the organizing team of the beloved SAS conference was forced to announce that the event would be postponed until the fall. Later, another difficult but correct decision was made: to cancel the SAS conference altogether this year.

    At the same time, it was the pandemic that gave us the opportunity to invite an unprecedented number of people to the online version of the conference, which we called SAS@Home: more than 2,000 people participated at its peak. All of them had the opportunity to experience the unique atmosphere of SAS: to see the coolest IT security experts in the company of colleagues with whom they have warm and friendly relationships.

    Now, this unique year presents us with a new surprise: the second SAS in one calendar year! Once again, everyone can visit this online event. Our listeners will plunge into the friendly atmosphere of our cozy online conference to listen to new stories from leading experts and threat hunters from around the world, from the comfort of their own couch.

    The speakers are experts at Kaspersky Lab:

    • Denis Legezo will tell a fascinating story about espionage in industrial companies worthy of the James Bond series.
    • Tatyana Shishkova will talk about long-running spyware that has been on the radar of analysts for a while but still continues to change and be of interest.
    • Costin Raiu will take the stage to untangle the issue of location tracking and explain how applications collect our data covertly.
    • Last but not least, Igor Kuznetsov and Mark Lechtik will share their fresh research disclosing something entirely new and unexpected.

    Well-known industry experts from other companies will also join us:

    • Katie Moussouris, CEO & Founder of Luta Security, who has been featured in two Forbes lists – The World's Top 50 Women in Tech and America's Top 50 Women in Tech – will talk about vulnerability disclosure programs (VDPs) across many government sectors, and what could possibly go wrong with them.
    • John Lambert, the Vice President of the Microsoft Threat Intelligence Center, will talk about “githubification” of InfoSec.
    • Kris McConkey from PwC will present a highly technical demonstration of ways to find victims and C2 servers associated with rare implants from multiple APT actors in situations where it is really hard to obtain any viable samples.
    • In addition, Ohad Zaidenberg, Marc Rogers, Nate Warfield and Patrick Wardle will share their stories.

Just like during the first SAS@Home, the last two days of the conference will be largely devoted to workshops that will help participants build up skills in different areas of digital security:

    • Vitaly Kamluk will teach how to use professional solutions for remote digital forensics.
• Pavel Cheremushkin will share the secrets of his remarkable success in vulnerability research in his workshop on the automated discovery of memory-corruption vulnerabilities.
• SAS@Home participants will also have the opportunity to attend a VirusTotal workshop conducted by our friends Vicente Diaz and Juan Infantes Diaz. This workshop will be of interest to any threat hunter who has not yet discovered all the capabilities VirusTotal offers.
    • A good friend of the SAS conference, Joe Fitzpatrick of Securing Hardware, will share his extensive knowledge of IoT security.

    As always, the SAS is preparing a lot of fun activities and gifts for attendees:

    • Easter egg challenge for the most attentive listeners.
• Mini CTF that will be announced this week. Three winners will get full access to the Kaspersky training course for experts, "Hunt APTs with Yara like a GReAT Ninja", for free.
    • All SAS@home participants will receive a discount code for the course that will be valid for the duration of the conference.

    All these activities, workshops and presentations will take place on October 6 through 8:

    11:00 AM – 2:00 PM Eastern
8:00 AM – 11:00 AM Pacific
    4:00 PM – 7:00 PM London
    6:00 PM – 9:00 PM Moscow

    You will find the full SAS@Home agenda here: https://thesascon.com/Online

    All you need to do to join this awesome conference is register here: https://kas.pr/3e7o

Why master YARA: from routine to extreme threat hunting cases. Follow-up – https://securelist.com/why-master-yara/98600/ – Tue, 29 Sep 2020

On September 3, we hosted our "Experts Talk. Why master YARA: from routine to extreme threat hunting cases" session, in which several experts from our Global Research and Analysis Team and invited speakers shared their best practices on YARA usage. At the same time, we presented our new online training covering some ninja secrets of using YARA to hunt for targeted attacks and APTs.

    Here is a brief summary of the agenda from that webinar:

    • Tips and insights on efficient threat hunting with YARA
    • A detailed demo of our renowned training
• A threat hunting panel discussion with a lot of real-life YARA rule examples

Due to time constraints, we were not able to answer all of the questions during the webinar, so we answer the remaining ones below. Thanks to everyone who participated, and we appreciate all the feedback and ideas!

    Questions about usage of YARA rules

    1. How practical (and what is the ROI), in your opinion, is it to develop in-house (in-company/custom) YARA rules (e.g. for e-mail / web-proxy filtering system), for mid-size and mid-mature (in security aspects) company, when there are already market-popular e-mail filtering/anti-virus solutions in use (with BIG security departments working on the same topic)?
  In the case of mid-size companies, they can benefit a lot from three things connected to YARA, because YARA gives you some flexibility to tailor security for your environment.
  First is the usage of YARA during incident response. Even if you don't have an EDR (Endpoint Detection and Response) solution, you can easily roll out YARA and collect results across the network using PowerShell or bash. In that case, someone in the company should have experience developing YARA rules.
  Second is the usage of third-party YARA rules. It's an effective way to add one more layer of protection. On the other hand, you need to maintain your hunting and detection sets, fix rules and remove false positives anyway, which once again means that someone needs experience in writing YARA rules.
  Third is that, as mentioned earlier, it might be really useful to have rules that look for organization-specific information or IT assets. It can be a hunting rule that triggers on specific project names, servers, domains, people, etc. (see the sketch below).
  So the short answer is yes, but it is important to invest time wisely, so as not to become overwhelmed with unrelated detections.
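  As an illustration, a minimal hunting rule of this kind could look as follows (the project codename and hostname are purely hypothetical placeholders):

    rule Org_Specific_Hunting
    {
        meta:
            description = "Hunting: files mentioning internal assets (placeholder names)"
        strings:
            $proj   = "ProjectAurora" ascii wide nocase          // hypothetical internal project codename
            $server = "fs01.corp.example.com" ascii wide nocase  // hypothetical internal hostname
        condition:
            any of them
    }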

2. What is the biggest challenge in your daily YARA rule writing/management process? Is it a particular malware family, actor, or perhaps a specific anti-detection technique?
  In our experience, certain file formats make writing YARA rules more difficult. For instance, malware stored in the Office Open XML file format is generally trickier to detect than in the OLE2 compound storage, because of the additional layer of ZIP compression. Since YARA itself doesn't support ZIP decompression natively, you need to handle that with external tools. Other examples include HLL (high-level language) malware, notably Python or Golang malware. Such executables can be several megabytes in size and contain many legitimate libraries. Finding good strings for detecting the malicious code in such programs can be very tricky.
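  On the other hand, YARA reads raw bytes, so it is easy to scope a rule to a container format it can see directly, such as OLE2, by checking the magic bytes; a minimal sketch (the macro name is just an illustrative string):

    rule Macro_In_OLE2_Document
    {
        strings:
            $macro = "AutoOpen" ascii    // typical VBA auto-execution entry point
        condition:
            // D0 CF 11 E0: OLE2 compound file magic, read as a little-endian uint32
            uint32(0) == 0xE011CFD0 and $macro
    }

  An OOXML document, being a ZIP archive, would not expose such strings to YARA without prior decompression.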

3. Some malware uses YouTube, Twitter or other social media comments for command-and-control. In that regard, where there are no C2 IPs, is it currently hard to detect these?
  Yes and no. Yes, it's hard to get the real C2, because you need to reverse engineer or dynamically run the malware to get the final C2. No, it's relatively easy to detect, because from an ML point of view it's a pure anomaly when very unpopular software connects to a popular website.

4. What is the size of the publicly available collections for people to use YARA against? And what are some good ways to access a set of benign files if you don't have access to retrohunts/VTI?
  You can use YARA on clean files and malware samples. Creating a comprehensive clean collection is a challenge, but in general, to avoid false positives, we recommend grabbing OS distributions and popular software. For this purpose, a good starting point could be sites like:
      https://www.microsoft.com/en-us/download
      https://sourceforge.net/
      ftp://ftp.elf.stuba.sk/pub/pc/

  For malware collections it's a bit trickier. In an organization it's easier, since you can collect executables from your own infrastructure. There are also websites with collections of malicious files for research purposes; Lenny Zeltser's blogpost contains a good list of references:
      https://zeltser.com/malware-sample-sources/

      The final size of such a collection could be several terabytes or even more.

5. Can YARA be used to parse custom packers?
  Yes, but not out of the box. YARA has a modular architecture, so you can write a module that first unpacks the custom packer and then scans the resulting binary.
  A more common option is to run YARA against already unpacked objects, e.g. the output of unpacking tools like Kaspersky Deep Unpack, or sandbox and emulator dumps.

6. What is the trade-off when we want to hunt for new malware using YARA rules? How many FPs should we accept when we need rules that detect new variants?
  It depends on what you want to catch. In general, from a research perspective, it's OK to have an average FP rate of up to 30%. On the other hand, production rules should have no FPs whatsoever!

7. Could YARA help us to detect a fileless attack (malware)?
  Yes, YARA can scan memory dumps and different data containers. You can also run YARA against telemetry, though that may take some additional steps and a properly modified ruleset.

8. We can use YARA together with network monitoring tools like Zeek to scan files such as malicious documents. Can YARA be used against an encrypted protocol?
  Only if you do a MITM (man-in-the-middle) and decrypt the traffic, since YARA rules most likely expect to run on decrypted content.

9. What open source solution do you recommend in order to scan a network with YARA rules?
  YARA itself plus PowerShell or bash scripts; or, as an alternative, you can use an incident response framework and monitoring agent like osquery, Google Rapid Response, etc. Other options are based on EDR solutions, which are mostly proprietary.

10. Which is better in terms of resource utilization for detection in live environments: YARA or Snort?
  YARA and Snort are different tools providing different abilities. Snort is designed specifically as a network traffic scanner, while YARA is for scanning files and/or memory. The best approach is to combine the usage of YARA and Snort rules together!

Questions about creating YARA rules and the training course

    1. Are we able to keep any of the materials after the course is finished?
  Yes. The Kaspersky YARA cheat sheets and training slides, which include Kaspersky's solutions to the exercises, are among the materials available for you to download and use even after the training session has finished.

2. Is knowledge about string extraction or hashing sufficient to create solid YARA rules? Are there other things to learn as prerequisites?
  It depends on the case. Strings and hashing are the basic building blocks for creating YARA rules. Other important things are the PE structure, peculiarities and anomalies in headers, entropy, etc. Also, to create rules for a specific file format, you need some knowledge of the architecture of the corresponding platform and file types.

3. Can we add a tag to a rule saying that it is elegant, efficient or effective, similar to the exploit rankings in Metasploit: excellent, great or normal?
  Sounds like a good idea. Actually, YARA rules already support tags after the rule name:
  https://yara.readthedocs.io/en/stable/writingrules.html
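  A quick sketch of the syntax (the tag names are arbitrary identifiers of our own choosing):

    rule Tagged_Example : excellent low_fp
    {
        strings:
            $a = "placeholder string"
        condition:
            $a
    }

  Tags can then be used to filter scan output; for example, 'yara -t excellent rules.yar target_dir/' prints matches only for rules tagged 'excellent'.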

4. Maybe you can explain more about the fact that metadata strings don't have a direct impact on the actual rule.

  As we described before, a YARA rule can consist of meta, strings and condition sections. While the condition is a mandatory element, the meta section is used only for providing more information about that specific YARA rule, and it is not used at all by the YARA scanning engine.
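  To illustrate, in a rule like the following, the meta values can be changed freely without affecting what the rule matches (all names and values are illustrative):

    rule Meta_Example
    {
        meta:
            author      = "analyst"
            description = "documentation only; the scanning engine ignores this section"
            version     = 1
        strings:
            $a = "placeholder string"
        condition:
            $a
    }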

5. ASCII is the default, so why do you need to put 'ascii' in the rule?
  Without 'ascii', a string like '$a1 = "string" wide' would match only the Unicode (UTF-16) representation of the string. To search for both the ASCII and the Unicode representations, we need '$a1 = "string" ascii wide'.
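  A minimal rule showing the difference (the string is a placeholder):

    rule Ascii_And_Wide
    {
        strings:
            $wide_only = "string" wide          // matches only the UTF-16LE form: s\x00t\x00r\x00...
            $both      = "string" ascii wide    // matches both the ASCII and the UTF-16LE forms
        condition:
            any of them
    }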

6. Can we use RegEx in YARA? Is nesting possible in YARA?
    11. Yes, it’s possible to use RegEx patterns in YARA. Be aware that RegEx patterns usually affect performance and can be rewritten in the  form of lists. But in some cases you just cannot avoid using them and the YARA engine fully supports them.
      Nesting is also possible in YARA. You can write private rules that will be used as a condition or as a pre-filter for your other rules.
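  A compact sketch combining both features (the domain pattern is entirely made up):

    private rule Is_PE
    {
        condition:
            uint16(0) == 0x5A4D    // "MZ": DOS header magic, used here as a pre-filter
    }

    rule PE_With_Suspicious_URL
    {
        strings:
            $url = /https?:\/\/[a-z0-9.\-]+\.example\.com/ nocase    // hypothetical C2 domain pattern
        condition:
            Is_PE and $url
    }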

7. Is there a limit on the number of statements in a YARA rule?
  We have created several systems that generate YARA rules automatically, and over time these rules have reached tens of megabytes in size. While they still work fine for us, having a very large number of strings in one rule can lead to issues. In many cases, setting a larger stack size (see the yara -k option) helps.

8. Can we say that YARA is a double-edged sword? That is, a hacker can develop malware and then check with YARA whether there's anything similar out there, and enhance it accordingly?
  Sure, although they would need access to your private stash of YARA rules. In essence, YARA offers organizations a way to add extra defenses by creating custom, proprietary YARA rules for malware that could be used against them. Malware developers can always test their creations against antivirus products, which they can simply download or purchase. However, it would be much harder to get access to private sets of YARA rules.

9. This is a philosophical question: Juan said YARA has democratized hunting for malware. How have APTs and malware authors responded to this? Do they have anti-YARA techniques?
  A few years ago, we observed a certain threat actor consistently evading our private YARA rules within one to two months of us publishing a report. Although the YARA rules were very strong, the changes the threat actor made to the malware rather suggested they knew specifically what to change. For instance, in the early days they would use only a few encryption keys across different samples, which we, of course, used in our YARA rules. Later, they switched to a unique key per sample.

10. Would it be possible to create a YARA rule to find Morphy's games among a large set of chess games?
  Probably! Morphy was one of the most famous players of the so-called romantic chess period, characterized by aggressive openings, gambits and risky play. Some of the openings that Morphy loved, such as the Evans Gambit or the King's Gambit Accepted, together with games played at odds (Morphy would sometimes play without a rook against a weaker opponent), might yield some interesting games. Or you could just search for '$a1 = "Morphy, Paul" ascii wide nocase', perhaps together with '$a2 = "1. e4"' 🙂
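  Put together as a complete, and entirely tongue-in-cheek, rule, that might look like this:

    rule Morphy_Games
    {
        strings:
            $a1 = "Morphy, Paul" ascii wide nocase
            $a2 = "1. e4" ascii
        condition:
            $a1 and $a2
    }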

11. Would you recommend YARA for Territorial Dispute checks?
  Yes, of course. In essence, "Territorial Dispute" references a set of IoCs for various threat actors, identified through "SIGS". While some of them have been identified, for instance in Boldi's paper, many are still unknown. With YARA, you can search for unique filenames or other artifacts and try to find malware that matches those IoCs. Most recently, Juan Andres Guerrero-Saade was able to identify SIG37 as "Nazar"; check out his research here:
  https://www.epicturla.com/blog/the-lost-nazar

    Pro tips and tricks from the audience

    • Using YARA programmatically (e.g. via py/c) allows you to use hit callbacks to get individual string matches. This enables you to check for partial rule coverage (k of n strings matched but without triggering the condition), which is great for aiding rule maintenance.
• On top of the allowlist (clean stuff), known exploits and payloads should also be included in our YARA rule sets.
    • I always find it easier to maintain code by grouping the strings together.
    • As a dedicated/offline comment to JAG-S: The “weird” strings from the rule discussed most likely come from the reloc section (thus locking on encoded offsets), which would make the rule highly specific to a given build, even with a soft 15/22 strings required. That would still probably work well if the samples originate from a builder (i.e. configured stub) but should not generalize well. And for the IDA-extracted functions: consider wildcarding offsets to have better generalizing rules.
• When it comes to strings – besides the strings from disk, memory, network dumps, etc. – bringing in context and offsets should be a best practice. Then rank the strings in the context of the malware. This requires human expertise but can easily be incorporated into the YARA rule building process.
• Combining, in a flexible way, the YARA rule building process with enrichment from the recently announced Kaspersky Threat Attribution Engine will also be GReAT 🙂

Follow us on Twitter and other social networks for updates, and feel free to reach out to us to discuss interesting topics.

GReAT Ideas follow-up – https://securelist.com/great-ideas-follow-up/97816/ – Wed, 15 Jul 2020

    On June 17, we hosted our first “GReAT Ideas. Powered by SAS” session, in which several experts from our Global Research and Analysis Team shared insights into APTs and threat actors, attribution, and hunting IoT threats.

    Here is a brief summary of the agenda from that webinar:

    • Linking attacks to threat actors: case studies by Kurt Baumgartner
    • Threat hunting with Kaspersky’s new malware attribution engine by Costin Raiu
    • Microcin-2020: GitLab programmers ban, async sockets and the sock by Denis Legezo
    • The next generation IoT honeypots by Dan Demeter, Marco Preuss, and Yaroslav Shmelev

Sadly, the two hours of the session were not enough to answer all of the questions raised, so we answer them below. Thanks to everyone who participated, and we appreciate all the feedback and ideas!

    Questions about threat actors and APTs

    1. How do you see Stonedrill deployment comparing now? Its discovery was based on lucky structural similarities with Shamoon, but do you see it actively used or correlating to the spread of this malware?

      There is some 2020 activity that looks like it could be Stonedrill related, but, in all likelihood, it is not. We are digging through details and trying to make sense of the data. Regardless, wiper activity in the Middle East region from late 2019 into early 2020 deployed code dissimilar to Stonedrill but more similar to Shamoon wipers. We stuck with the name “Dustman” – it implemented the Eldos ElRawDsk drivers. Its spread did not seem Stonedrill related.

  At the same time, no, the Stonedrill discovery was not based on luck. There are multiple overlaps between Shamoon 2.0 and Stonedrill, which you may review via "Download full report" in the 'From Shamoon to StoneDrill' blogpost. You might note that Stonedrill is a somewhat more refined and complex code, used minimally.

  While the Shamoon spreader shares equivalent code with Orangeworm's Kwampirs spreader and the two are closely linked, we have not seen the same level of similarity with Stonedrill. However, several of the Shamoon 2.0 executables share quite a few unique genotypes with both Stonedrill and Kwampirs. In the above paper, we conclude that Stonedrill and Shamoon are most likely spread by two separate groups with aligned interests, for reasons explained in the report PDF. It may also be that some of the codebase, or some of the resources providing the malware, are shared.

    2. Do the authors of Shamoon watch these talks?

      Perhaps. We know that not only do offensive actors and criminals attempt to reverse-engineer and evade our technologies, but they attempt to attack and manipulate them over time. Attending a talk or downloading a video later is probably of interest to any group.

    3. Are there any hacker-for-hire groups that are at the top level? How many hacker-for-hire groups do you see? Are there any hacker-for-hire groups coming out of the West?

  Yes. There are very capable and experienced hacker-for-hire groups that have operated for years. We do not publicly report on all of them, but some come up in the news every now and then. At the beginning of 2019, for example, Reuters published insightful reporting on a top-level mercenary group and their Project Raven in the Middle East. Their coordination, technical sophistication and agile capabilities were all advanced. In addition to the reported challenges facing the Project Raven group, some of these mercenary teams may draw on a truly global mix of resources, presenting moral and ethical challenges.

    4. I assume Sofacy watches these presentations. Has their resistance to this analysis changed over time?

      Again, perhaps they do watch. In all likelihood, what we call “Sofacy” is paying attention to our research and reporting like all the other players.

      Sofacy is an interesting case as far as their resistance to analysis: their main backdoor, SPLM/CHOPSTICK/X-Agent, was modular and changed a bit over the course of several years, but much of that code remained the same. Every executable they pushed included a modified custom encryption algorithm to hide away configuration data if it was collected. So, they were selectively resistant to analysis. Other malware of theirs, X-Tunnel, was re-coded in .Net, but fundamentally, it is the same malware. They rotated through other malware that seems to have been phased out and may be re-used at some point.

  They are a prolific and highly active APT. They added completely new downloaders and other new malware to their set. They also put a lot of effort into non-executable techniques like various forms of credential harvesting. So, they have always been somewhat resistant to analysis, but frequently leave hints in infrastructure and code across all those efforts.

  Zebrocy, a subset of Sofacy, pushed malware with frequent changes, recoding their malware in multiple languages, but often maintained similar or the same functionality across releases and re-releases. This redevelopment in new and often uncommon languages can be an issue for analysts, but something familiar will usually give it away.

    5. Have we seen a trend for target countries to pick up and use tools/zero-days/techniques from their aggressors? Like, is Iran more likely to use Israeli code, and vice versa?

  For the most part, no, we don't see groups repurposing code potentially known only to their adversary and firing it right back at them, likely because the adversary knows that code well and is probably going to watch for blowback.

  Tangentially, code reuse isn't really a trend, because offensive groups have always picked up code and techniques from their adversaries, whether these are financially motivated cybercriminal groups or APTs. And while we have mentioned groups "returning fire" in the past, like Hellsing returning spear-phishing fire at the Naikon APT, a better example of code appropriation is VictorianSambuca, or Bemstour. We talked about it at our T3 gathering in Cancun in October. It was malware containing an interesting zero-day exploit that was collected, repurposed, touched up and redeployed by APT3, HoneyMyte and others. But as far as we know, the VictorianSambuca package was picked up and used against targets other than its creator.

      Also, somewhere in the Darkhotel/Lazarus malware sets, there may be some code blowback, but those details haven’t yet been hammered out. So, it does happen here and there, maybe out of necessity, maybe to leave a calling card and shout-out, or to confuse matters.

    6. If using API-style programming makes it easier to update malware, why don’t more threat actors use it?

  I think we are talking here about the exported function callbacks in the Microcin last-stage Trojan. Nobody can tell for sure, but from my point of view, it's a matter of the programmer's experience. A "senior" developer takes a lot into consideration during development, including the architectural approach, which can make maintenance easier in the future.

  A "junior" one just solves the Trojan's main tasks: spying capabilities plus some anti-detection and anti-analysis tricks, and it's done. So if the author has solid programming experience, they carefully plan the data structures and software architecture. It seems that not all of the actors have developers like that.

7. Have you seen proxying/tunneling implants using IoT devices for APT operations, such as the use of SNMP by CloudAtlas? Do you think that's a new way to penetrate company networks? Have you ever encountered such cases?

  We watched the massive Mirai botnets for a couple of years, waiting to see an APT takeover or repurposing, and we didn't find evidence that it happened. Aside from that, yes, APTs are known to have tunneled through a variety of IoT devices to reach their intended targets. IoT devices like security web cams, and their associated network requirements, need to be hardened and reviewed, as their network connections may lead to unintended exposure of internal resources.

  With elections going on around the world, municipalities and government agencies contracting with IT companies need to verify attack-surface hardening and understand that everything, from their Internet-connected parking meters to connected light bulbs, can be part of a targeted attack or be misused as part of an incident.

    8. How often do you see steganography like this being used by other actors? Any other examples?

  Steganography certainly isn't used exclusively by the SixLittleMonkeys actor. We could also mention malware such as NetTraveler, Triton, Shamoon, Enfal, etc. So, generally speaking, the percentage of steganography usage among all malicious samples is quite low, but it does happen from time to time.

  The main reason to use it, from the malefactors' point of view, is to conceal not just the data itself but the very fact that data is being uploaded or downloaded. For example, it can help to bypass deep packet inspection (DPI) systems, which is relevant for corporate security perimeters. The use of steganography may also help bypass security checks by anti-APT products, if the latter cannot process all image files.

    Questions about KTAE (Kaspersky Threat Attribution Engine)

    For more information, please also have a look at our previous blogpost, Looking at Big Threats Using Code Similarity. Part 1, as well as at our product page.

    1. What are “genotypes”?
      Genotypes are unique fragments of code, extracted from a malware sample.
    2. How fine-grained do you attribute the binaries? Can you see shared authors among the samples?
  KTAE does not include author information per se. You can see relevant shared code and string overlaps.
    3. Are genotypes and YARA rules connected?
      Not directly. But you can use genotypes to create effective YARA rules, since the YARA engine allows you to search for byte sequences.
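  For instance, a genotype, being a byte sequence, can be dropped straight into a rule as a hex string; a schematic sketch with made-up bytes:

    rule Genotype_Example
    {
        strings:
            // stand-in bytes for a genotype; the ?? wildcards cover variable operands
            $geno = { 55 8B EC 83 EC ?? 8B 45 08 50 E8 ?? ?? ?? ?? }
        condition:
            $geno
    }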
    4. How many efforts do you see for groups to STEAL+REUSE attribution traces on purpose?
  We have seen such efforts and reported about them, for example with OlympicDestroyer.
    5. How do you go about removing third-party code sharing?
      We incorporated our own intelligence to only match on relevant parts of the samples.
    6. Do genotypes work on different architectures, like MIPS, ARM, etc.? I’m thinking about IoT malware.
      Yes, they work with any architecture.
    7. What determines your “groundtruth”?
      Groundtruth is a collection of samples based on our 20+ years of research and classification of malware.
8. Can KTAE be implemented in-house?
      We offer multiple options for deploying KTAE. Please get in touch with us for more info: https://www.kaspersky.com/enterprise-security/cyber-attack-attribution-tool.
    9. For the attribution engine, would you expect APT-group malware authors to start integrating more external code chunks from other groups to try to evade attribution?
  We see such behavior; please refer to Question 12 below.
    10. Do you feel more manufacturers will follow Kaspersky’s suit in letting victims know the threat actors behind malware detections on endpoints?
      At the moment, KTAE is a standalone solution not integrated in endpoints.
    11. What is the parameter for looking at the similarity in malware code? Strings? Packer? Code? What else?
      KTAE uses genotypes to match similarities.
12. What difference does it make if, for example, I am a threat actor and reuse code from some APT group? How do you determine that it is really the same actor and not just an impersonator who used the same code or malware, or reused the malware for their own operation?
  KTAE handles code similarities between malware samples and provides relevant information on that basis. Further information to be used for attribution may include TTPs, etc., for which you may find our Kaspersky Threat Intelligence Services helpful.
13. I guess the follow-up is: will they be able to evade attribution after watching these webinars and learning about the attribution engine?
  It's known that such techniques can be used for technical attribution on a malware-sample basis. Evading them would require knowing all the details, metrics and database entries (including updates) to check against, which is rather complex and difficult.
    14. Can you start taking the samples submitted by CYBERCOM and just post publicly what KTAE says in the future?
      We are posting certain interesting findings, e.g. on Twitter.
    15. How do we buy KTAE? Is it a private instance in our own org or hosted by you?
      We offer multiple options for deploying KTAE. Please get in touch with us for more info: https://www.kaspersky.com/enterprise-security/cyber-attack-attribution-tool.
    16. Can you expand on how you identify a genotype and determine that it is unique?
  Genotypes are unique fragments of code, extracted from a malware sample. As for uniqueness, there is a good analogy: the Fruit Ninja game. We played Fruit Ninja and extracted (sliced) genotypes from all the good programs known to us; then we did the same with malicious samples and samples marked as APTs. After that operation, we knew all the genotypes that belonged to good programs and removed them from the databases of the bad ones. We also store the number of times each genotype appears in the samples, so we can identify the really unique stuff.
    17. How many zero-day vendors do you see with this engine?
  KTAE does not handle vulnerabilities, only code fragments and the like, for similarity checks.
    18. In the future, do you see a product like KTAE being integrated into security offerings from Kaspersky, so that samples can be automatically scanned when detected as an alert, as opposed to individually uploading them?
      We are planning to do cross-product integration.
    19. Have you run The Shadowbrokers samples through KTAE and if so, were there any unexpected overlaps?
  Yes, we did. We found an overlap between Regin samples and cnli-1.dll.
    20. Could it be easy for a threat actor to change code to avoid KTAE identification?
  Theoretically, yes. Assuming they produce never-before-seen genotypes, KTAE might miss classifying that malware. That being said, generating completely new genotypes requires a lot of time and money, plus a lot of careful work. We wish threat actors good luck with that. 🙂
    21. When you attribute a campaign, do you also consider some aspects relating to sociopolitical events?
  At Kaspersky, we only do technical attribution, based, for example, on similarities in malware samples or the TTPs of groups; we don't do attribution of any entity at a geopolitical or social level.

    Questions about IoT threats and honeypots

    If you want to join our honeypot project, please get in touch with us at honeypots@kaspersky.com.

    1. Do you have any IoT dataset available for academia?
      Please get in touch with us via our email address listed above (honeypots@kaspersky.com).
    2. How does a system choose which honeypots to direct an attack at?
      We developed this modular and flexible infrastructure with defined policies to handle that automatically, based on the attack.
    3. Okay, so, soon, IoT malware will do a vmcheck before it loads…. Then what?
      In our honeypots, we use our own methods to defeat anti-VM checks. Depending on future development of malware, we are also prepared to adjust these to match actual vmcheck methods.
    4. Do the honeypots support threat intelligence formats like STIX and TAXII?
  Currently, such a feature is not available. If there is interest, we can implement it to make the honeypots easier for our partners to use.
    5. Can anyone partner with you guys? Or do they need certain visibility or infrastructure to help out?
  Anyone with a spare IP address who is able to host a Linux system to receive attacks can participate. Please get in touch with us at honeypots[at]kaspersky[dot]com.

    Questions about Kaspersky products and services

    1. What new technology has Kaspersky implemented in their endpoint product? As EDR is the latest emerging technology, has Kaspersky implemented it in their endpoint product?
  The Kaspersky endpoint product contains EDR alongside other cutting-edge technologies. More details are listed on the product page.
    2. What do you think of the Microsoft Exchange Memory Corruption Vulnerability bug? How can Kaspersky save the host system in such attacks?
  We would need to know the CVE number of the bug the question refers to. From what we know, one of the "loud" bugs fixed recently was CVE-2020-0688, which is referenced here. We detect this vulnerability in our products using the Behavior Detection component, with the verdict name PDM:Exploit.Win32.Generic. Also, Kaspersky products have vulnerability scanners that notify you about vulnerabilities in installed software, and we provide a patch management solution for business environments that helps system administrators handle software updates for all computers and servers on the corporate network.
    3. How can a private DNS protect the Host System from attacks?
  While DNS is a key component of the Internet, disrupting DNS queries can impact a large portion of Internet users. We know for sure that the people running the DNS root servers are professionals who know their job really well, so we are not that worried about the root servers being disrupted. Unfortunately, attackers sometimes focus on specific DNS resolvers and manage to disrupt large portions of the Internet, as in the 2016 DDoS against the Dyn DNS resolver. Although it is limited in its use, a private DNS system can protect against large DDoS attacks, because it is private and may be harder for attackers to reach.

    Advanced questions raised

    We are not afraid of tough questions; therefore, we did not filter out the following ones.

    1. Where can we get one of those shirts Costin is wearing?
      We are about to launch a GReAT merchandise shop soon – stay tuned.
    2. Who cut Jeff’s hair?
      Edward Scissorhands. He’s a real artist. Can recommend.
    3. Did Costin get a share from the outfits found in the green Lambert’s house when it got raided?
      We can neither confirm nor deny.
    4. Who is a better football team, Steelers or Ravens?
      Football? Is that the game where they throw frisbees?

We hope you find these answers useful. The next session of the GReAT Ideas. Powered by SAS webinar series, where we will share more of our insights and research, will take place on July 22. You can register for the event here: https://kas.pr/gi-sec

As promised, some of the best questions asked during the webinar will be awarded a prize from the GReAT Team. The winning questions are:
    “Are there any hacker for hire groups that are at the very top level? How many hackers-for-hire groups do you see? Are there any hacker for hire groups coming out of the west?”
    “Can you expand on how you identify a genotype and determine that it is unique?”

    We will contact those who submitted these questions shortly.

Follow us on Twitter and other social networks for updates, and feel free to reach out to us to discuss interesting topics.

    On Twitter:

    • Costin Raiu: @craiu
    • Kurt Baumgartner: @k_sec
    • Denis Legezo: @legezo
    • Dan Demeter: @_xdanx
    • Marco Preuss: @marco_preuss
    • Yury Namestnikov: @SomeGoodOmens

     

What does it take to become a good reverse engineer? – https://securelist.com/become-a-good-reverse-engineer/96743/ – Wed, 22 Apr 2020

    How much money and effort does it take to become a good reverse engineer? Do you even need to be one?

    There are no universally acceptable answers to these questions. Software reverse engineering (RE) is not a science but a skillset combined with specific knowledge and backed by a lot of experience.

For several years, we have been sharing the RE knowledge we accumulated in the form of training sessions provided to paying customers. These sessions ranged from two days at the SAS conference to a full five workdays in the extended version, and covered many aspects of our own work, primarily in IDA Pro and our in-lab reverse-engineering framework.

    A typical piece of code disassembled in IDA Pro

Due to the novel coronavirus (COVID-19) pandemic, our schedule for the training sessions has changed completely. And not only that: the reversing landscape itself has changed since last year. Released in March 2019, the free and open-source reverse engineering tool Ghidra lowered the barrier to entry into the field.

    The same piece of code viewed in Ghidra

    So, while we are all working from home and, hopefully, have time to learn something new, why not tear some binary code apart and pick up some reverse engineering skills? This may prove especially helpful if your work is related to malware, incident response or forensics.

It is certainly not feasible to learn RE in one webinar. Within one hour, we will outline the typical workflow that we follow when analyzing malware. We will dissect real-life malicious code using both IDA Pro and Ghidra, and demonstrate some of the most useful features of these disassemblers.

The rest, as in many other disciplines, comes with experience. And we are still looking forward to seeing you in our reverse engineering training sessions at SAS Conference 2020 (two days) or elsewhere (a whole week!).
