Get the Latest News and Press Releases

DSC Weekly Digest 3 August 2021

  • With recent socio-economic disruptions caused by the pandemic, many industries struggle to align production and stocking with rapidly shifting purchasing demands. Our ebook highlights how injecting AI into existing business intelligence solutions can greatly enhance the ability to predict future demand for goods, even in uncertain and dynamic times. Read More

AI, Gaming and the Metaverse

There were several interesting announcements this week about the Rise of the Metaverse. Depending upon who you talk to, it’s the next big thing, Internet 3.0, Snow Crash careening into Ready Player One, with vibes of Tron thrown in for good measure. It’s Virtual Reality 2.0. And it’s coming to a screen near you tomorrow … or maybe in fifty years. You can never tell with these kinds of things.

There is a certain innate similarity between virtual reality and self-driving cars. To hear the press releases from either 1999 or 2015, VR and autonomous vehicles were literally just around the corner, an engineering problem, not a conceptual problem. By 2021, the first truly consumer autonomous vehicles were supposed to be coming off the assembly lines, and VR should have been achieved by now. Instead, AVs are still at least a decade away, and truly immersive, fully interactive VR remains just as elusive.

Now, anyone who games regularly can tell you that immersive realities are definitely here – so long as you’re very careful to constrain how far out of the box someone can go. Anyone who’s played Halo or Overwatch or even Red Dead Redemption can tell you that the games are becoming quite realistic, and arguably games such as the Sims (version x) attest to the ability to have multiple individuals within a given simulation.

As with AVs, the challenge ultimately isn’t engineering – it’s social. Second Life explored the themes of virtual reality in a social sense. What happened afterward was simple: people discovered that virtual conversations and virtual sex with virtual avatars was, at the end of the day, boring and more than a little creepy. It was like going to a bar without any alcohol. 

We enjoy games precisely because we are, to quote Terry Pratchett, narrative creatures. We are natural storytellers, and we love both being told and participating in stories. We love pitting ourselves against others, seeing ourselves as fighting the good fight or solving deep mysteries that would have stymied Sherlock Holmes. Psychologists also talk about the dangers of escapism, but games are attractive primarily because most IRL stories are not very exciting.

There’s some significant money to be made in Extended Reality (XR), but it’s important to understand that for it to truly work, XR needs to concentrate as much on the metadata, the story, as it does on the various communication protocols and representations.

The latter is not insignificant, mind you. The virtual world is the quantum cognate of the real world. Identity and uniqueness are intrinsic to the physical world, and creating duplicates that travel through real space and time is a nearly insurmountable problem. In the virtual world, however, uniqueness and identity are simply abstract concepts, and maintaining uniqueness for any significant length of time can prove difficult at best (this is what blockchain is supposed to do, but we’re discovering the very real energy costs in even approximating uniqueness).

Yet, ultimately, the real challenge will come when the various players in this space recognize that without compelling content, where immersion means that people become a part of the narrative, not simply an avatar walking around stiffly in a pretty landscape, XR will fail. I’d also like to believe that ultimately it will take agreement on standards for all of the fiddling bits, like identity management, concurrency, data flows, and so forth, to all come together so that moving from one narrative to another becomes feasible (or even makes sense), but I suspect that will only come once the landscape has become nearly irrevocably fractured. There are too many people with dollar signs in their eyes at this stage to expect any difference.

What does this mean to data science? Easy – a game is simply a simulation with a better plot. AI is intimately tied to the concept of the Metaverse, and will only become more so over time.

Some Recent Changes

There are a couple of new changes to the newsletter. The first is changing the DSC article listings so that they show authors and publication dates. We’re proud of our writers, and I feel that posting who wrote what will make it easier for you as a reader to seek out the writers you enjoy most, as well as helping you discover new and different viewpoints. Clicking on the writer links will give you a feed showing all of their previous articles.

Another, more subtle change is that, as a member of DSC, clicking on a TechTarget article will take you to the article without triggering the paywall. You can now enjoy more of our parent company’s content, and get perspective from industry leaders. It will also help us track what’s most important to you, our readers. Note that you can only see TechTarget content when coming from the Newsletter or from the DSC site itself.

In media res,

Kurt Cagle
Community Editor,
Data Science Central

To subscribe to the DSC Newsletter, go to Data Science Central and become a member today. It’s free! 

Data Science Central Editorial Calendar

DSC is looking for editorial content specifically in these areas for July, with these topics having higher priority than other incoming articles.

  • MLOps and DataOps
  • Machine Learning and IoT
  • Data Modeling and Graphs
  • AI-Enabled Hardware (GPUs and similar tools)
  • Javascript and AI
  • GANs and Simulations
  • ML in Weather Forecasting
  • UI, UX and AI
  • Jupyter Notebooks
  • No-Code Development
  • Metaverse

DSC Featured Articles

Picture of the Week
Sample Data Architecture Diagram


To make sure you keep getting these emails, please add to your browser’s address book.

This email, and all related content, is published by Data Science Central, a division of TechTarget, Inc.

275 Grove Street, Newton, Massachusetts, 02466 US

You are receiving this email because you are a member of TechTarget. When you access content from this email, your information may be shared with the sponsors or future sponsors of that content and with our Partners, see up-to-date Partners List below, as described in our Privacy Policy. For additional assistance, please contact:

Copyright 2021 TechTarget, Inc. All rights reserved. Designated trademarks, brands, logos and service marks are the property of their respective owners.

Privacy Policy  |  Partners List

Protect Against BlackMatter Ransomware Before It’s Offered

Insikt Group

Editor’s Note: The following post is an excerpt of a full report. To read the entire analysis, click here to download the report as a PDF.

Insikt Group reverse-engineered the Linux and Windows variants of BlackMatter ransomware and provided a high-level overview of the functionality in addition to IOCs, utilities, and detections. The intended audience of this research is threat intelligence professionals and those interested in a technical overview of the new ransomware variant.

Executive Summary

Insikt Group analyzed Windows and Linux variants of BlackMatter ransomware, a new ransomware-as-a-service (RaaS) affiliate program founded in July 2021. During our technical analysis, we found that both variants accomplish similar goals of encrypting a victim’s files and appear to have been developed by a relatively sophisticated group. The Windows version of the ransomware employs several obfuscation and anti-reverse engineering techniques, suggesting that it was created by an experienced ransomware developer. BlackMatter’s Linux variant is another example of an emerging trend of malware targeting Linux-based systems, including ESXi and network-attached storage (NAS) devices. Recorded Future has provided reverse-engineering utilities, a YARA rule, and IOCs that organizations can use to hunt or detect the ransomware.


The post Protect Against BlackMatter Ransomware Before It’s Offered appeared first on Recorded Future.

China’s Ambitions Toward Digital Colonization

Recorded Future’s Insikt Group recently released research outlining China’s attempts at what they describe as digital colonization. A focus of China’s efforts involves providing attractive, cost-effective infrastructure deals for developing African nations, using technology sourced from China, technology that includes substantial surveillance capabilities. For some regimes this is all the better, but for others it means joining the online global marketplace in exchange for allowing Chinese authorities an unfettered view into their nation’s online activities.

To help us understand the implications of this bargain we welcome back to our program Recorded Future’s Charity Wright, expert cyber threat intelligence analyst. 

This podcast was produced in partnership with the CyberWire.

The post China’s Ambitions Toward Digital Colonization appeared first on Recorded Future.

An Introduction to Statistical Sampling

Image Source: Statistical Aid

Sampling is the statistical procedure of selecting a representative part of an existing population or study area. Specifically, a sample is drawn from the study population using some statistical method. For example, if we want to calculate the average age of Bangladeshi people, we cannot deal with the whole population; instead, we must work with some representative part of it. That representative part is called a sample, and the procedure is called sampling.

Why sampling is needed

—  It makes possible the study of a large population with diverse characteristics.
—  It is economical.
—  It is fast.
—  It can improve accuracy.
—  It saves the sources of data from being entirely consumed.
—  Sometimes we simply cannot examine the whole population, as in a blood test; in such situations sampling is a must.


Probability Sampling

Sampling techniques can be divided into two categories: probability and non-probability. Probability sampling is based on the concept of random selection, where each element of the population has a known, non-zero chance of occurring in the sample; randomization, or chance, is its core.
For example, if a researcher is dealing with a population of 100 people, each person in the population would have odds of 1 in 100 of being chosen. This differs from non-probability sampling, in which the members of the population do not all have the same odds of being selected.
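The equal-odds idea above can be sketched as a simple random sample using Python's standard library (the population of 100 people and the sample size of 10 are illustrative assumptions):

```python
import random

# Hypothetical population of 100 people, labeled 0..99.
population = list(range(100))

# Simple random sampling: every person has the same chance of
# appearing in the sample (here, a sample of 10 without replacement).
random.seed(42)  # fixed seed so the draw is reproducible
sample = random.sample(population, k=10)

print(sample)            # 10 distinct members of the population
print(len(set(sample)))  # 10 -- no member is drawn twice
```

Because `random.sample` draws without replacement, each of the 100 people has the same 1-in-10 chance of ending up in a sample of this size.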

Probability sampling in practice

·    In an opinion poll, a relatively small number of persons are interviewed and their opinions on current issues are solicited in order to discover the attitude of the community as a whole.
·    At border stations, customs officers enforce the laws by checking the effects of only a small number of travelers crossing the border.
·    A departmental store wishing to examine whether it is losing or gaining customers draws a sample from its list of credit card holders by selecting every tenth name.
·    In a manufacturing company, a quality control officer takes one sample from every lot and rejects the lot if the sample is damaged.

Advantages of probability sampling

—  Creates samples that are highly representative of the population.
—  Sampling bias tends to zero.
—  Higher reliability of research findings.
—  Increased accuracy of sampling error estimation.
—  The possibility to make inferences about the population.

Disadvantages of probability sampling

—  Higher complexity compared to non-probability sampling.
—  More time consuming, especially when creating larger samples.
—  Usually more expensive.
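The department-store case above, selecting every tenth name from a list of credit card holders, is systematic sampling. A minimal sketch, assuming a hypothetical list of 100 cardholders and a random starting offset:

```python
import random

# Hypothetical list of credit-card holders.
cardholders = [f"customer_{i}" for i in range(1, 101)]

# Systematic sampling: pick a random start within the first
# interval of ten, then take every tenth name thereafter.
random.seed(0)                   # fixed seed for reproducibility
start = random.randrange(10)     # random offset in [0, 10)
sample = cardholders[start::10]  # every tenth name from the offset

print(len(sample))  # 10 names drawn from the 100 cardholders
```

The random starting offset keeps the method probabilistic: every name still has a 1-in-10 chance of selection, even though only one random draw is made.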

Non-Probability sampling

The process of selecting a sample from a population without using statistical probability theory is called non-probability sampling.
Let’s say that a university has roughly 10,000 students. These 10,000 students are our population (N). Each of the 10,000 students is known as a unit, but it is hardly possible to identify and randomly select every student.
Here we can use non-random selection of a sample to produce a result.
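One common non-probability method is a convenience sample: rather than selecting at random, the researcher takes whichever units are easiest to reach. A minimal sketch under that assumption (the roster and the sample size of 200 are made up for illustration):

```python
# Hypothetical roster of 10,000 students (the population N).
students = [f"student_{i}" for i in range(10_000)]

# Convenience sampling: instead of random selection, take the first
# 200 students who happen to be available, e.g. those who answered a
# hallway survey. Selection probability is not uniform across the
# roster, so inferences about all 10,000 students are unreliable.
sample = students[:200]

print(len(sample))  # 200
```

Note that every unit outside the first 200 has zero chance of selection, which is exactly why the inferential limitations discussed below apply.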


When to use non-probability sampling

·    It can be used when demonstrating that a particular trait exists in the population.
·    It can also be useful when the researcher has a limited budget, time, or workforce.

Advantages of non-probability sampling

·    Samples can be selected purposively.
·    It enables researchers to reach difficult-to-identify members of the population.
·    Lower cost.
·    Requires less time.


Disadvantages of non-probability sampling

—  It is difficult to make valid inferences about the entire population because the sample selected is not representative.
—  We cannot calculate confidence intervals.

Launches Free Outlier Analysis Boxplot Template

Visit the Statistics page and download a free outlier analysis boxplot template from the Topics section. Boxplot analysis can solve business analysis problems. This free outlier analysis template is one of several posted in Cool Number Crunching Templates.
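Boxplot-based outlier analysis rests on the interquartile range (IQR): points lying more than 1.5 × IQR beyond the quartiles are flagged. A minimal sketch using only Python's standard library (the data values are made up for illustration):

```python
import statistics

# Illustrative data with one obvious outlier.
data = [12, 14, 15, 15, 16, 17, 18, 19, 20, 55]

# First and third quartiles (exclusive method, the default for n=4).
q1, _, q3 = statistics.quantiles(data, n=4)
iqr = q3 - q1

# Boxplot rule: flag points beyond 1.5 * IQR from the quartiles.
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = [x for x in data if x < low or x > high]

print(outliers)  # [55] -- the extreme value is flagged
```

The 1.5 multiplier is the conventional "whisker" length used by most boxplot implementations; widening it to 3.0 flags only extreme outliers.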

Weekly Entering & Transitioning into a Business Intelligence Career Thread. Questions about getting started and/or progressing towards a future in BI goes here. Refreshes on Mondays: (August 02)

Welcome to the ‘Entering & Transitioning into a Business Intelligence career’ thread!

This thread is a sticky post meant for any questions about getting started, studying, or transitioning into the Business Intelligence field. You can find the archive of previous discussions here.

This includes questions around learning and transitioning such as:

  • Learning resources (e.g., books, tutorials, videos)
  • Traditional education (e.g., schools, degrees, electives)
  • Career questions (e.g., resumes, applying, career prospects)
  • Elementary questions (e.g., where to start, what next)

I ask everyone to please visit this thread often and sort by new.

