🌟 Introduction to I/O-Bound and CPU-Bound Applications


Image by Michael Dziedzic on Unsplash

Are you looking to improve the performance of your Ruby on Rails application? In this article, I explore what I/O-bound and CPU-bound applications are, how they differ, and how they affect resource allocation, hardware choices, and performance optimization. Whether you are a beginner or an experienced developer, this article will give you valuable insights.

By Ihor Pohasii

A little background

As a member of the Black Hole team in the UK, I deal with different kinds of challenges, one of which was recently solved using a multi-threading approach introduced by a colleague.

This colleague showed me a solution that improved one of our endpoints using threads.

Threads are the concept that lets you use concurrency in Ruby. This made me revisit threads myself, and in the end I wrote this article.
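As a quick refresher before going further, here is a minimal sketch of Ruby's Thread API. This is not code from the endpoint in question, just the core spawn-and-join pattern:

```ruby
# Spawn two threads; each Thread.new block runs concurrently.
threads = 2.times.map do |i|
  Thread.new(i) { |n| n * 10 }
end

# Thread#value joins the thread and returns its block's result.
values = threads.map(&:value)
puts values.inspect
# => [0, 10]
```

Passing `i` into `Thread.new` (instead of closing over it) is the idiomatic way to give each thread its own copy of the loop variable.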

Reading Ruby’s documentation, we may come across the following statement:

“As a result of using threads, you will have a multi-threaded Ruby program, which can accomplish things faster.”

But with one caveat:

In MRI (Matz’s Ruby Interpreter), the default way to run Ruby applications, you will only benefit from multiple threads when running I/O-bound applications.
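You can see this caveat in action with a small experiment (the workload here is illustrative, not from the original article, and exact timings vary by machine): run the same CPU-bound work serially and in two MRI threads, then compare.

```ruby
require 'benchmark'

# A small CPU-bound workload: pure computation, no I/O.
def burn
  (1..500_000).reduce(0) { |sum, i| sum + i }
end

serial   = Benchmark.realtime { 2.times { burn } }
threaded = Benchmark.realtime do
  2.times.map { Thread.new { burn } }.each(&:join)
end

# Under MRI's Global VM Lock, the threaded run is not ~2x faster
# for CPU-bound work; both take roughly the same wall-clock time.
puts format("serial: %.3fs, threaded: %.3fs", serial, threaded)
```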

I remembered I/O-bound applications from university. Then I immediately started wondering about the other kinds of bounds. This is how I ended up giving a presentation on the topic during one of the JobandTalent backend meetings. This article is based on it.

Disclaimer: please treat this piece as an introduction to the topic, as there is much more to discover.

What are I/O-bound and CPU-bound applications?

Let’s start with the definition.

I/O-bound applications are limited by input and output operations.

CPU-bound applications, on the other hand, are limited by the capacity of the central processing unit (CPU).

Take a look at the example.

The graph below shows the implementation of two programs.


The gray rectangles are CPU bursts, i.e., the time when the CPU is computing things and solving various equations. The gaps between them are the time spent waiting for input and output.

If a program uses 90 to 100% of your CPU, it is CPU-bound. The top of the graph represents it.

Advanced mathematical computations, sorting algorithms, and machine learning models are well-known examples of CPU-bound applications.

The bottom of the image depicts an I/O-bound application. As you can see in the picture, such programs have shorter CPU bursts and longer waiting periods for input and output.

Real-life examples of I/O-bound applications include programs that interact with databases.

Applications that deal with large amounts of data are I/O-bound if they need to read that data from, or write it to, disk.

Differences

That’s all for the high-level theory.

But why do the Ruby docs even highlight the need to understand the difference between I/O-bound and CPU-bound workloads?

In fact, they matter greatly when it comes to the following:

  • Resource allocation (more informed decisions about the right amount of memory, disk space, CPU power, etc.),
  • Hardware choice (different programs have different hardware requirements; for example, a CPU-bound application may benefit from a faster CPU with multiple cores, while an I/O-bound application may benefit from faster storage or more memory),
  • Performance optimization (identifying potential bottlenecks, etc.).

Realistic examples

An I/O-bound application

A single-threaded example

Let’s take an example of I/O-bound Ruby code running in a single thread.

In our example, I will use a .txt file of ~400 MB. All executions will be isolated in Docker containers (to make it easier to limit the resources available during execution).

# Start the timer
start = Time.now

file = File.open("input.txt", "r")
contents = file.read
result = contents.scan(/\w+/).size

elapsed = Time.now - start

puts "Elapsed time: #{elapsed} seconds"
#
# Elapsed time: 59.997594584 seconds

Reading the file and iterating over its contents in a single thread took us approximately one minute. As you can see, there were no expensive operations that would cause long CPU bursts, and most of the time was spent reading the file. If we map the logic of this program onto the diagram above, we can say without hesitation that this is a typical example of I/O-bound code.

A multi-threaded example

Let’s solve the same problem as in the example above, but this time using multiple threads.

# Define the number of threads
threads = 4

start = Time.now

# Split the input file into segments for each thread
file = File.open("input.txt", "r")
segment_size = (file.size.to_f / threads).ceil
segments = (0...threads).map do |i|
  start_pos = i * segment_size
  file.seek(start_pos)
  file.read([segment_size, file.size - start_pos].min)
end

# Start a new thread for each segment
threads.times.map do |i|
  Thread.new(segments[i]) do |segment|
    # Perform the IO-bound file reading operation on each segment
    result = segment.scan(/\w+/).size
  end
end.each(&:join)

elapsed = Time.now - start

# Print the elapsed time
puts "Elapsed time: #{elapsed} seconds"
#
# Elapsed time: 38.827828157 seconds

Now the time elapsed is much shorter.

Instead of reading the file from start to finish in a single thread, we spawned 4 threads, split the file into 4 segments (computing the start and end of each), and then scanned them concurrently.

Before drawing any conclusions, let’s jump to a CPU-bound example.

A CPU-bound application

To show the direct impact of the CPU resources available to the execution environment, I will use Docker’s CPU settings when running the containers.

From the Docker docs:

--cpuset-cpus: Limit the specific CPUs or cores a container can use.

A comma-separated list or hyphen-separated range of CPUs a container can use, if you have more than one CPU. The first CPU is numbered 0. A valid value might be 0-3 (to use the first, second, third, and fourth CPU) or 1,3 (to use the second and fourth CPU).

Example program:

require 'parallel'

# Start the timer
start = Time.now

# Perform a CPU-bound calculation
def expensive_operation
  iterations = 100_000_000
  sum = 0

  (1..iterations).each do |i|
    sum += Math.sqrt(i)
  end

  sum
end

result = Parallel.map(1..3, in_processes: 3) do |i|
  sum = expensive_operation

  puts "Worker: #{Parallel.worker_number} completed!"
end

elapsed = Time.now - start
puts "Elapsed time: #{elapsed} seconds"

The first thing I did here was to bring in Parallel, allowing us to work around MRI’s limitation and use real parallelism.

You may have noticed that I specified in the code that I wanted to use 3 processes, but in reality the number of processes running at one time will always be limited by the number of available CPUs.
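One way to make that limit explicit in your own code is to cap the requested process count at the machine’s CPU count. This is a sketch, not from the original article: `worker_count` is a hypothetical helper, while `Etc.nprocessors` is the standard-library call that reports the number of available CPUs.

```ruby
require 'etc'

# Hypothetical helper: never ask for more worker processes
# than the machine actually has CPUs for.
def worker_count(requested)
  [requested, Etc.nprocessors].min
end

puts worker_count(3)
# Usage sketch: Parallel.map(1..3, in_processes: worker_count(3)) { ... }
```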

In the code, I introduced an expensive operation, which should produce longer CPU bursts and shorter waiting times, and executed it three times inside a parallel block.

Let’s start the container for our program with the following restriction:

docker run --cpuset-cpus="1" …

Running this code in a Docker environment with the CPU restricted gives us the following output:

# Worker: 0 completed!
# Worker: 2 completed!
# Worker: 1 completed!
# Elapsed time: 18.445319911 seconds

Let’s try running the same program, but this time extending the available CPUs to 3:

docker run --cpuset-cpus="0-2" …

Running this code in a Docker environment with three CPUs available gives us the following output:

# Worker: 0 completed!
# Worker: 2 completed!
# Worker: 1 completed!
# Elapsed time: 6.349463044 seconds

Increasing the number of available CPUs improved the execution time by a factor of 3, without any changes to the code!

Summary

Fixing the performance of CPU-bound programs is more expensive, but easier to do: all you need is to increase the number of CPUs. Improving the performance of I/O-bound applications can be free of charge, but it can increase the complexity of the code.

📰 Published by Job&Talent Engineering on 2023-03-28 16:09:00

Source: Job&Talent Engineering – Medium