Month: December 2020

SolarWinds Attribution: Are We Getting Ahead of Ourselves?

Note: This blog is an abstract of an in-depth analysis on SolarWinds attribution. Download the complete analysis here.

A previous version of this report incorrectly attributed the disclosure of Jake Williams’ work for the National Security Agency’s Tailored Access Operations group to Sandworm. That disclosure was made by the ShadowBrokers.

The recent expansive intrusion campaign against more than half a dozen government agencies and an as-yet unknown number of other organizations, carried out through malicious backdoors in the SolarWinds Orion platform, is already one of the most significant acts of cyber espionage in history. This intrusion, dubbed SUNBURST/Solorigate, appears intended for information theft and espionage rather than destruction, placing the campaign within the realm of counterintelligence, not just incident response. Analyzing the incident through a counterintelligence lens may fill the gap in descriptive language for it, replacing blanket labels such as “sophisticated” with analysis that is less confusing for network defenders. Additionally, only a handful of companies have the direct access and investigative resources needed to gain meaningful insight into the technical components of the backdoor. The actor is a different story.

Like most complex, public intrusions, attribution has been messy. FireEye has named the actor behind this intrusion “UNC2452,” and Volexity dubbed the threat actor “Dark Halo,” stating that the actor is the same as UNC2452, though FireEye has not substantiated that claim. Adding further complexity, Washington Post correspondent Ellen Nakashima cited unnamed government sources claiming Russian actors, in particular APT29, are responsible for the attack. Members of the U.S. Congress have also publicly accused Russia, in particular the Russian Foreign Intelligence Service (SVR), of being the responsible party, and have added calls for a response. Microsoft President Brad Smith has also called for strong action. While we expect these organizations have far more insight into the nature of the breach, as well as classified sources of intelligence information, calls for a strong response should include publicly disclosed information to support the accusations.

Public evidence for these claims is currently scant. Some, including Jake Williams, who runs Rendition Security and teaches for the SANS Institute, have said that technical evidence is forthcoming but cannot be disclosed without tipping off the adversaries to missteps and giving them a means to cover their tracks. Still, the lack of public evidence gives rise to claims that other actors, perhaps even other countries, may be responsible, a claim made by President Donald Trump as well.

Intelligence analysis, properly conducted, combats bias. Bias can lead to missteps in policy. Engaging in policy discussions about proportional responses (or, at times, very disproportionate responses) without strong evidence is potentially dangerous. As rumors of attribution to Russia circulate, attribution prior to evidence is premature and myopic, biasing the analyst toward only certain behaviors and actors. Further, intelligence analysis provides both strategic and tactical guidance for responses. At the strategic level, we can be assured that responses are coordinated and proportional. At the tactical level, defenders can apply intelligence to seed proactive activities, such as hunting for behaviors after indicators run dry.

Among information security researchers, some discussion has occurred regarding the possibility that alternate actors, such as APT41, may ultimately be found responsible. APT41, also known as Winnti and Barium, has been linked to the People’s Republic of China and has previously conducted attacks that invite comparison with the SUNBURST/Solorigate attack. (Note: Recorded Future has synonymized several named groups, including APT41, Axiom Hacking Group, Barium, Blackfly, Dogfish, Ragebeast, Wicked Panda, and Winnti Group, as the Winnti Umbrella Group.) In March 2017, APT41 executed a supply chain attack by breaching the company that made CCleaner, a system-cleaning utility. Researchers from Cisco Talos and Morphisec uncovered the campaign, which ultimately spread to 2.27 million computers. While these comparisons fall well short of the requirements for attribution, APT41 does merit consideration as a candidate actor group responsible for the SUNBURST/Solorigate breach. Enter threat intelligence.

We approached our analysis using existing techniques in order to focus on attribution and adversary mapping. We pursued methodologies including mapping MITRE ATT&CK techniques, victimology, temporal indications, and historic use of indicators to give insight into attacker motivation and intent. We analyzed both public information as well as information from Recorded Future’s historic index to determine a set of unique characteristics about this campaign. Our goal was not to conclusively attribute this attack, but rather to review existing data through the lens of intelligence analysis and contribute to conversation on adversary tracking.

To read our in-depth analysis, download the complete report.

The post SolarWinds Attribution: Are We Getting Ahead of Ourselves? appeared first on Recorded Future.

Insights about LOL using Big data analysis! [Code attached]

Hello All,
Hope you are safe and sound 🙂 This semester we are taking a course called “Big data analysis,” and I tried to get insights using the public dataset provided by Riot Games. I used patch 10.9 (you can find it here). The insights I got are:

  • Champion (pick, ban, win, lose) rates
  • Champion duos and synergies
  • Item (pick, win) rates
  • Item synergies (with champions and classes)
  • Item suggestions (by champion)

Also, my friend wrote Spark Streaming code that uses a heuristic to decide which team will win in a live match, and which player on the opposing team is a threat.
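As a rough illustration of the kind of aggregation behind pick and win rates (a minimal Python sketch; the match structure and champion names are made up and do not reflect the actual Riot dataset schema):

```python
from collections import defaultdict

def champion_rates(matches):
    """Aggregate pick rate and win rate per champion.

    Each match is assumed to be a dict like:
    {"winners": ["Ahri", ...], "losers": ["Zed", ...]}
    """
    picks = defaultdict(int)
    wins = defaultdict(int)
    for match in matches:
        for champ in match["winners"]:
            picks[champ] += 1
            wins[champ] += 1
        for champ in match["losers"]:
            picks[champ] += 1
    total = len(matches)
    return {
        champ: {
            "pick_rate": picks[champ] / total,       # share of matches featuring the champion
            "win_rate": wins[champ] / picks[champ],  # wins among matches where picked
        }
        for champ in picks
    }

# Tiny made-up sample of three matches
matches = [
    {"winners": ["Ahri"], "losers": ["Zed"]},
    {"winners": ["Zed"], "losers": ["Ahri"]},
    {"winners": ["Ahri"], "losers": ["Lux"]},
]
rates = champion_rates(matches)
```

The real dataset would of course need parsing into this shape first; the point is only that the rate statistics reduce to simple counting passes over matches.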

Here is the GitHub repo; you will find all figures and the top 10 outputs in the README.

If you like it, please leave a star on GitHub, and if you have any suggestions or ideas for future collaboration, you can reply here or PM me 🙂

PS: The idea originated from my TA, who was a great help and mentor. Also, the data wasn’t that polished and I have never played LoL (haha), so excuse any mistakes or anything that doesn’t make sense (maybe I got the game logic wrong).

Take care, and thank you!

submitted by /u/moustafa-7

Weekly Entering & Transitioning into a Business Intelligence Career Thread. Questions about getting started and/or progressing towards a future in BI go here. Refreshes on Mondays: (December 28)

Welcome to the ‘Entering & Transitioning into a Business Intelligence career’ thread!

This thread is a sticky post meant for any questions about getting started, studying, or transitioning into the Business Intelligence field. You can find the archive of previous discussions here.

This includes questions around learning and transitioning such as:

  • Learning resources (e.g., books, tutorials, videos)
  • Traditional education (e.g., schools, degrees, electives)
  • Career questions (e.g., resumes, applying, career prospects)
  • Elementary questions (e.g., where to start, what next)

I ask everyone to please visit this thread often and sort by new.

submitted by /u/AutoModerator

Where to go to research size of a market?

I have a client that wants me to find market data on a small niche market. The end goal is to use the data to see what new products are reasonable to introduce into the market based on my research. Has anyone done something like this? How did you approach this?

submitted by /u/immunobio

We’re rebranding PrestoSQL as Trino

We’re not going anywhere – it’s the same software, by the same people, just under a new name.

Learn about why we’re doing it: http://trino.io/blog/2020/12/27/announcing-trino.html

submitted by /u/martintraverso

AI Enables Predictability and Better Business

Joining us this week is Aarti Borkar, vice president of product for IBM Security. She shares the story of her professional journey, starting out as a self-described data-geek through the path that led her to the leadership position she holds today.

Aarti also shares her views on artificial intelligence, and how she believes it can be an enabler for security and the business itself. And we’ll get her thoughts on welcoming new and diverse talent to the field.

This podcast was produced in partnership with the CyberWire.

The post AI Enables Predictability and Better Business appeared first on Recorded Future.

A simple way to understand the statistical foundations of data science

Introduction

There are six broad questions that can be answered in data analysis, according to an article called “What is the question?” by Jeffrey T. Leek and Roger D. Peng. These questions help frame our thinking about data science problems. Here, I propose that they also provide a unified framework for relating statistics to data science.


The six questions, according to Jeffrey Leek and Roger Peng, are:

A descriptive question seeks to summarize a characteristic from a dataset. You do not interpret the results. For example: the number of fresh fruits and vegetables served in a day.
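In code, a descriptive question reduces to a plain summary computation. A minimal sketch with made-up numbers:

```python
# Servings of fresh fruits and vegetables recorded over one week (made-up data)
servings = [3, 5, 2, 4, 6, 3, 5]

# Summarize the characteristic; no interpretation is attached to the result
total = sum(servings)
mean_per_day = total / len(servings)
```

The summary (a total and a daily mean) is the whole answer; nothing is inferred about any larger population.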


An exploratory question is one where you try to find a pattern or a relationship between variables, i.e. you aim to generate a hypothesis. At this stage you do not test the hypothesis; you are merely generating one. More generally, you are proposing a hypothesis which could hold in a new sample from the population.


An inferential question restates the proposed hypothesis as a question to be answered by analyzing the data. You are validating the hypothesis, i.e. asking whether the observed pattern holds beyond the data at hand. Most statistical problems are inferential problems.
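One way to sketch an inferential question is a permutation test on made-up samples, asking whether an observed difference in group means holds beyond chance (stdlib only):

```python
import random
from statistics import mean

random.seed(0)

# Two made-up samples, e.g. a measurement taken under two conditions
group_a = [5.1, 4.8, 5.6, 5.0, 5.3, 4.9]
group_b = [4.2, 4.5, 4.0, 4.4, 4.1, 4.3]

observed = mean(group_a) - mean(group_b)

# Repeatedly shuffle the pooled data and re-split it; count how often a
# random split produces a difference at least as extreme as the observed one
pooled = group_a + group_b
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:6]) - mean(pooled[6:])
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / trials
# A small p-value supports the claim that the pattern holds beyond this sample
```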


A predictive question is one where you predict the outcome for a specific instance.


A causal question: Unlike predictive and inferential questions, causal questions relate to averages in a population, i.e. how changing the average of one measurement would affect another. Causal questions apply to data from randomized trials and statistical experiments, where you try to understand the cause behind an observed effect by designing a controlled experiment and changing one factor at a time.


A mechanistic question asks what mechanism lies behind an observation, i.e. how a change in one measurement always and exclusively leads to deterministic behaviour in another. Mechanistic questions typically apply to engineering situations.

Implications for Data Science

The six-question framework raises awareness about which question is being asked and aims to reduce confusion in discussions and the media. It also helps provide a single framework for relating statistics problems to data science.

Regarding the six questions:

  • Descriptive and exploratory techniques are often considered together
  • Predictive and inferential questions can also be combined

So we could consider four questions:

  • Exploratory
  • Inferential
  • Causal
  • Mechanistic


Why does this framework matter?

It matters because the framework surfaces questions which may not have been encountered before.


Conclusion

The six questions bring rigor and simplicity to analysis. I also find that they provide a comprehensive set of questions linking statistics to data science. They help you think beyond the norm, i.e. beyond the problems you happen to encounter, to consider all possible problems.


References

Jeffrey T. Leek and Roger D. Peng, “What is the question?”, Science, 2015.

