Month: August 2021

Big Data Technology Importance in Human Resource Management

Big data in human resource management refers to the use of several data sources to evaluate and improve practices including recruitment, training and development, performance, compensation, and end-to-end business performance.

It has attracted the attention of human resource professionals, who can analyze huge amounts of data to answer important questions regarding employee productivity, the impact of training on business performance, employee attrition, and much more. Using sophisticated HR software that provides robust data analytics, professionals can make smarter and more accurate decisions.

In this article, let’s dive deeper into the role of big data technology in HR in today’s fast-paced world, where massive quantities of diverse information need to be analyzed.

Recruitment

Recruiting top talent is a primary task of HR departments. Recruiters must screen candidates’ resumes and interview suitable applicants until they find the right person. Big data, delivered through the Internet, offers a much broader platform for the recruitment process.

By integrating recruitment with social networking, HR recruiters can find more information about candidates, such as photos and videos, living conditions, social relationships, and abilities, so the applicant’s profile becomes more vivid and it is easier to recruit the right fit. Candidates, in turn, can learn more about the organization they will be interviewing with, making the recruitment process more open and transparent.

Performance and compensation

Compensation is an essential indicator that attracts potential applicants, and earning a salary is one of the main reasons employees participate in work. Traditional performance management systems often rely more on qualitative than quantitative measures, leaving compensation out of touch with performance results.

Data analytics solutions identify meaningful patterns in a set of data. They help quantify the performance of a firm, product, operation, team, or even individual employees to support business decisions. Everything from data compilation to manipulation and statistical analysis forms the core of big data analysis.

With the help of big data technology in performance management, professionals can record the daily workload, the specific content of the work, and the task achievement of each employee. Professionals who have completed talent management certification programs also have an advantage in receiving better compensation. Sophisticated HR management software that performs these operations enhances work efficiency and reduces enterprise investment in human capital. It is also useful for calculating salaries automatically and gaining better insights into performance standards.

Benefits packages

Employers can gather health-related data on their employees and, as a result, create more attractive and beneficial packages. Certified HR professionals get to enjoy more perks too. It is crucial that organizations be transparent about what they are doing, revealing how they collect and use this data, to avoid legal concerns related to discriminatory practices.

Training and development

Workforce training is an important enabler of a business’s sustainable development. Successful training enhances employees’ knowledge and improves their performance, so firms can retain their human-resource advantages in fierce competition and increase their profitability.

Traditional employee training demands a lot of manpower, material, and financial resources. With the advent of big data, information access and sharing have become more convenient: employees can easily search for and find the information they want to learn through the network anytime, anywhere. Workforce big data analysis uses software to apply statistical models to employee-related data to optimize human resource management (HRM). It also helps record the learning behavior of each employee, who can not only use the online system to analyze their own training needs but also choose their preferred form of training.

To summarize

The role of big data in human resource management has become more prominent. The value of data has certainly accelerated the way a business functions. This rapidly-growing technology enables HR professionals to effectively manage employees so that business goals can be accomplished more quickly and efficiently.

What is the most robust binary-classification performance metric?

Accuracy, F1, and TPR (a.k.a. recall or sensitivity) are well-known and widely used metrics for evaluating and comparing the performance of machine-learning-based classifiers.

But, are we sure we evaluate classifiers’ performance correctly? Are they or others such as BACC (Balanced Accuracy), CK (Cohen’s Kappa), and MCC (Matthews Correlation Coefficient) robust?
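To see why this question matters, consider a small illustrative sketch (mine, not taken from the paper): on an imbalanced toy problem, accuracy can look reassuring while MCC and Cohen’s Kappa tell a very different story. The example below uses scikit-learn.

```python
# Illustrative sketch: common metrics on an imbalanced toy problem where the
# classifier finds only 1 of 5 positives and adds 2 false positives.
from sklearn.metrics import (accuracy_score, f1_score, recall_score,
                             balanced_accuracy_score, cohen_kappa_score,
                             matthews_corrcoef)

y_true = [1] * 5 + [0] * 95                   # 5 positives, 95 negatives
y_pred = [1, 0, 0, 0, 0] + [1, 1] + [0] * 93  # 1 TP, 4 FN, 2 FP, 93 TN

print("Accuracy :", accuracy_score(y_true, y_pred))          # ~0.94, looks strong
print("TPR      :", recall_score(y_true, y_pred))            # 0.20
print("F1       :", f1_score(y_true, y_pred))                # 0.25
print("BACC     :", balanced_accuracy_score(y_true, y_pred)) # ~0.59
print("CK       :", cohen_kappa_score(y_true, y_pred))       # ~0.22
print("MCC      :", matthews_corrcoef(y_true, y_pred))       # ~0.23, reveals weakness
```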

My latest research on benchmarking classification performance metrics (BenchMetrics) has just been published by Springer Nature in the journal Neural Computing and Applications (SCI, Q1).

Read here: https://rdcu.be/cvT7d

Highlights

  • A benchmarking method is proposed for binary-classification performance metrics.
  • Meta-metrics (metric about metric) and metric-space concepts are introduced.
  • The method (BenchMetrics) tested 13 available and two recently proposed metrics.
  • Critical issues are revealed in common metrics while MCC is the most robust one.
  • Researchers should use MCC for performance evaluation, comparison, and reporting.

Abstract

This paper proposes a systematic benchmarking method called BenchMetrics to analyze and compare the robustness of binary-classification performance metrics based on the confusion matrix for a crisp classifier. BenchMetrics, introducing new concepts such as meta-metrics (metrics about metrics) and metric-space, has been tested on fifteen well-known metrics including Balanced Accuracy, Normalized Mutual Information, Cohen’s Kappa, and Matthews Correlation Coefficient (MCC), along with two recently proposed metrics, Optimized Precision and Index of Balanced Accuracy in the literature. The method formally presents a pseudo universal metric-space where all the permutations of confusion matrix elements yielding the same sample size are calculated. It evaluates the metrics and metric-spaces in a two-staged benchmark based on our proposed eighteen new criteria and finally ranks the metrics by aggregating the criteria results. The mathematical evaluation stage analyzes metrics’ equations, specific confusion matrix variations, and corresponding metric-spaces. The second stage, including seven novel meta-metrics, evaluates the robustness aspects of metric-spaces. We interpreted each benchmarking result and comparatively assessed the effectiveness of BenchMetrics with the limited comparison studies in the literature. The results of BenchMetrics have demonstrated that widely used metrics have significant robustness issues, and MCC is the most robust and recommended metric for binary-classification performance evaluation.
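As a rough illustration of the metric-space concept (a sketch of mine; the paper’s formal construction, criteria, and meta-metrics are in the article itself), one can enumerate every confusion matrix that sums to a fixed sample size and evaluate a metric across all of them:

```python
# Sketch of a "metric-space": all confusion matrices (TP, FN, FP, TN) with a
# fixed sample size n, with MCC evaluated over the whole space (n kept tiny).
from itertools import product
from math import sqrt

def mcc(tp, fn, fp, tn):
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

n = 20
space = [(tp, fn, fp, tn)
         for tp, fn, fp, tn in product(range(n + 1), repeat=4)
         if tp + fn + fp + tn == n]

values = [mcc(*cm) for cm in space]
print(len(space), "confusion matrices;",
      "MCC range:", min(values), "to", max(values))
```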

A critical question for researchers who wish to derive objective research outcomes

The chosen performance metric is the only instrument for determining which machine learning algorithm is best.

So, for any specific classification problem domain in the literature:

Question: If we evaluate the performance of algorithms based on MCC, will the comparisons and rankings change?

Answer: I think so. At least, we should try and see.

Question: But how?

Answer:

Please, share the results with me.
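As a concrete starting point (my own sketch, not a procedure prescribed in the paper), you can simply re-run your model comparison with MCC as the scoring function next to whatever metric you currently report and check whether the ranking changes:

```python
# Sketch: re-ranking classifiers by MCC alongside accuracy via cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import matthews_corrcoef, make_scorer
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
mcc_scorer = make_scorer(matthews_corrcoef)

for name, clf in [("LogReg", LogisticRegression(max_iter=1000)),
                  ("RandomForest", RandomForestClassifier(random_state=0))]:
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    mcc = cross_val_score(clf, X, y, cv=5, scoring=mcc_scorer).mean()
    print(f"{name:13s}  accuracy={acc:.3f}  MCC={mcc:.3f}")
```

If the ordering of your algorithms differs under MCC, that is exactly the kind of result worth reporting.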

Citation for the article:

Canbek, G., Taskaya Temizel, T. & Sagiroglu, S. BenchMetrics: a systematic benchmarking method for binary classification performance metrics. Neural Comput & Applic (2021). https://doi.org/10.1007/s00521-021-06103-6

Weekly Entering & Transitioning into a Business Intelligence Career Thread. Questions about getting started and/or progressing towards a future in BI go here. Refreshes on Mondays: (August 30)

Welcome to the ‘Entering & Transitioning into a Business Intelligence career’ thread!

This thread is a sticky post meant for any questions about getting started, studying, or transitioning into the Business Intelligence field. You can find the archive of previous discussions here.

This includes questions around learning and transitioning such as:

  • Learning resources (e.g., books, tutorials, videos)
  • Traditional education (e.g., schools, degrees, electives)
  • Career questions (e.g., resumes, applying, career prospects)
  • Elementary questions (e.g., where to start, what next)

I ask everyone to please visit this thread often and sort by new.

submitted by /u/AutoModerator

Cyber Citizenship Education is Essential

Scholars and researchers from the think tank New America recently released an education policy initiative titled, Teaching Cyber Citizenship — Bridging Education and National Security to Build Resilience to New Online Threats. The report outlines challenges facing educators when it comes to preparing students for the online world, describes the broad spectrum of reasons why it’s important that they are properly prepared, and provides resources and potential solutions for communities and school systems to adopt.

Joining us this week are two of the report’s coauthors, Lisa Guernsey, director of New America’s Teaching, Learning and Tech Program, and Peter W. Singer, strategist and senior fellow.

This podcast was produced in partnership with the CyberWire.


Understanding Self Supervised Learning

In the last blog, we discussed the opportunities and risks of foundation models. Foundation models are trained on a broad dataset at scale and are adaptable to a wide range of downstream tasks. In this blog, we extend that discussion to learn about self-supervised learning, one of the technologies underpinning foundation models.

NLP has taken off thanks to Transformer-based pre-trained language models (T-PTLMs). Models like GPT and BERT combine transformers, self-supervised learning, and transfer learning. In essence, they build universal language representations from large volumes of text data using self-supervised learning and then transfer this knowledge to subsequent tasks. This means you do not need to train the downstream (subsequent) models from scratch.
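As a rough sketch of what this reuse looks like in practice (illustrative only; the model name, toy data, and labels are placeholder choices, not something fixed by the survey), a pre-trained BERT encoder can be loaded with a fresh classification head and fine-tuned on labelled downstream examples:

```python
# Sketch: reuse a pre-trained language model for a downstream task instead of
# training from scratch ("bert-base-uncased" and the toy data are illustrative).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # pre-trained encoder + new task head

texts = ["great movie", "terrible plot"]  # toy downstream examples
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
loss = model(**batch, labels=labels).loss  # fine-tune the head (and optionally the encoder)
loss.backward()
```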

In supervised learning, training a model from scratch requires many labelled instances, which are expensive to generate. Various strategies have been used to overcome this problem. Transfer learning lets us learn in one context and apply that knowledge to a related context: knowledge learned in a source task is reused to perform well in a similar target task. The idea originated in computer vision, where large pre-trained CNN models are adapted to downstream tasks by adding a few task-specific layers on top of the pre-trained model and fine-tuning them on the target dataset.
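The same recipe in computer vision, sketched with torchvision (an illustration under the usual assumptions, not code from the survey):

```python
# Sketch: freeze a pre-trained CNN and fine-tune only a new task-specific layer.
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(pretrained=True)  # CNN pre-trained on ImageNet
for param in backbone.parameters():
    param.requires_grad = False               # keep the learned features fixed

# Replace the final layer with a head for the target task (e.g. 10 classes).
backbone.fc = nn.Linear(backbone.fc.in_features, 10)
# Only backbone.fc is now trainable and gets fine-tuned on the target dataset.
```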

Another problem is that deep learning models such as CNNs and RNNs cannot easily model long-term context. To overcome this, transformers were proposed. A transformer contains a stack of encoders and decoders and can learn complex sequences.
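A minimal sketch of that encoder-decoder stack using PyTorch's built-in module (all sizes are illustrative):

```python
# Sketch: PyTorch's transformer is literally a stack of encoder and decoder
# layers that operate on whole sequences at once.
import torch
import torch.nn as nn

model = nn.Transformer(d_model=512, nhead=8,
                       num_encoder_layers=6, num_decoder_layers=6)

src = torch.rand(20, 32, 512)  # (source length, batch, embedding dim)
tgt = torch.rand(10, 32, 512)  # (target length, batch, embedding dim)
out = model(src, tgt)          # shape (10, 32, 512): one vector per target position
```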

The idea of Transformer-based pre-trained language models (T-PTLMs) evolved in the NLP research community by combining transformers with self-supervised learning (SSL). Self-supervised learning allows transformers to learn from the pseudo supervision provided by one or more pre-training tasks. GPT and BERT were the first T-PTLMs developed using this approach. SSL does not need large amounts of human-labelled data because the supervision comes from the unlabelled pre-training data itself.

Thus, self-supervised learning (SSL) is a new learning paradigm that helps a model learn from the pseudo supervision provided by pre-training tasks. SSL also finds applications in areas like robotics, speech, and computer vision.

SSL is similar to, yet distinct from, both unsupervised and supervised learning. Like unsupervised learning, it does not require human-labelled instances; like supervised learning, it still relies on supervision, but that supervision is generated by the pre-training tasks themselves.
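To make the pseudo-supervision idea concrete, here is a hedged sketch of a masked-token pre-training task, where the "labels" are manufactured from raw text itself (the token IDs are illustrative, roughly BERT-style):

```python
# Sketch: pseudo supervision from raw text. A masked-token task hides some
# tokens and asks the model to predict them; no human labelling is involved.
import random
import torch

token_ids = torch.tensor([101, 7592, 2088, 2003, 2307, 102])  # an unlabelled sentence
MASK_ID = 103

inputs = token_ids.clone()
labels = torch.full_like(token_ids, -100)   # -100 = ignore this position in the loss
for i in range(1, len(token_ids) - 1):      # keep special tokens intact
    if random.random() < 0.15:              # mask roughly 15% of tokens
        labels[i] = token_ids[i]            # target = the original token
        inputs[i] = MASK_ID                 # input  = [MASK]

# `inputs` and `labels` now form a (pseudo-)supervised training pair created
# from raw text alone; a masked language model is trained to recover `labels`.
```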

In the next blog, we will continue this discussion by exploring a survey of transformer-based models.

 

Source: Adapted from

AMMUS : A Survey of Transformer-based Pretrained Models in Natural Language Processing

Katikapalli Subramanyam Kalyan, Ajit Rajasekharan, and Sivanesan Sangeetha

 

Image source pixabay – Children learning without supervision

Residual Networks in PyTorch

Hey guys! I wrote a simple tutorial and explanation on one of my favourite deep learning inventions in the past decade:

https://taying-cheng.medium.com/building-a-residual-network-with-pytorch-df2f6937053b

submitted by /u/Ok-Peanut-2681
