Channel: Dell TechCenter

DellEMC PowerEdge 14G Servers certified for VMware ESXi

This blog is written by Murugan Sekar & Revathi from the Dell Hypervisor Engineering Team. DellEMC has introduced the next generation (14G) of PowerEdge servers, which support the Intel Xeon Processor Scalable Family (Skylake-SP). This blog highlights 14G...(read more)

Which OpenManage solution is best for you? Ask the Dell Systems Management Advisor.


Dell provides a vast array of enterprise systems management solutions for many different IT needs and use cases. With so many useful options available, it's not always immediately obvious which Dell OpenManage solution will work best for your environment and requirements, whether you need to deploy, update, monitor, or maintain systems.

Luckily, Dell recently created an advisor tool that will recommend which OpenManage products will work best for you based on (but not limited to) factors in your environment such as:

  • Functionality you wish to implement (Monitor / Manage / Deploy / Maintain)
  • Size and brand mix of your server environment
  • Features of your Dell servers
  • Mix of physical vs virtual hosts
  • Current Dell and 3rd-party systems management tools you utilize
  • Need for 1:1 or 1-to-many tools

 

Once you complete a short questionnaire, the advisor will suggest the OpenManage tools that best suit your needs and provide useful information and links so that you can learn more.

Dell Systems Management Advisor

Other than the Systems Management Advisor, Dell TechCenter provides a wealth of information if you would like to evaluate our Systems Management technologies.  Please visit our additional Dell OpenManage links:

 
 

Simplified Rack Scale Management

Ed Bailey – Distinguished Eng., ESI Architecture, Dell EMC. "What if you could manage more devices, with fewer interfaces?" In today's large-scale environments you are always managing more and more devices – more compute...(read more)

Discovering the iDRAC IP via PowerShell

This blog was originally written by Teixeira Fabiano. Greetings! The DellEMC iDRAC (Integrated Dell Remote Access Controller) is a great feature that has been available on DellEMC servers for a long time; the 9th generation was just delivered on our new 14G. How great...(read more)

Extreme Scale Infrastructure (ESI) Architects' POV Series (REDFISH)

Paul Vancil – Distinguished Eng., ESI Architecture, Dell EMC. In today's typical large-scale data center, the number of servers that organizations are trying to keep track of has reached enormous proportions. In addition to that, existing...(read more)

Rack Scale & Ready: DSS 9000

Author: Robert Tung, Sr. Consultant, Product Management, Extreme Scale Infrastructure. This month, the Dell EMC Extreme Scale Infrastructure (ESI) group is releasing the latest version of the DSS 9000 – an open, agile and efficient rack-scale infrastructure...(read more)

Linux Multipathing


Author: Steven Lemons

What exactly is Linux multipathing? It’s the process used to configure multiple I/O access paths from Dell™ PowerEdge™ Servers to the Dell EMC™ SC Series storage block devices.

When thinking about how best to access these SC Series block devices from a Linux host, there are generally two configuration options available when needing the fail-over and load-balancing benefits of multipathing:

  • Dell EMC PowerPath™
  • Native Device Mapper Multipathing (DM-Multipath)

Much like gamers who each prefer a specific character build, each Linux admin has a preferred method of multipathing configuration. This blog calls out PowerPath with its support for SC Series storage, in addition to highlighting some of the native DM-Multipath configuration attributes that might otherwise be left at their "default" values (you seasoned Linux admins will get that joke). While highlighting these configuration attributes, which might enrich your DM-Multipath journey (let's face it, the wrong choice here is a headache generator), we'd also like to present some simple Bash functions for inclusion in your administrative tool belt.

The scenarios discussed in this blog adhere to Dell EMC recommended best practices (see the resources under Additional information below). If you're already familiar with our online resources, give yourself +10 Wisdom.

Note: All referenced testing or scripting was completed on RHEL 6.9, RHEL 7.3 & SLES 12 SP2.

Dell EMC PowerPath

PowerPath supports SC Series storage. Go ahead, take a moment to let that sink in.

YES! This exciting announcement was made at Dell EMC World 2016, in the live music capital of the world – Austin, TX.

So, let’s see here….

  • PowerPath supports the following Operating Systems (OSes): AIX®, HP-UX, Linux, Oracle Solaris®, Windows®, and vSphere ESXi®.
  • PowerPath supports many Dell EMC Storage Arrays, including Dell EMC Unity, SC Series, and Dell EMC VPLEX
  • PowerPath provides a single management interface across expanding physical and virtual environments, easing the learning curve
  • PowerPath provides a single configuration script for dynamic LUN scanning when adding or removing LUNs from the host (/etc/opt/emcpower/emcplun_linux – a huge time saver versus hand-crafting a bunch of for loops with echo statements at the CLI whenever SAN changes need to be scanned at the Linux host level; see the sketch after this list)
  • PowerPath provides built-in performance monitoring (disabled by default) for all of its managed devices
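
For contrast, the kind of manual rescan loop that emcplun_linux saves you from looks something like the sketch below. This is illustrative only, assuming hosts where a wildcard rescan of every SCSI host adapter is acceptable; adapt it to your HBA layout and run it as root.

# Minimal sketch of a manual SCSI bus rescan (the loop emcplun_linux replaces).
# "- - -" means all channels, all targets, all LUNs on each SCSI host adapter.
for host in /sys/class/scsi_host/host*; do
  echo "- - -" > "${host}/scan"
done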

The below screenshot of the powermt (PowerPath Management Utility) output highlights the attached SC Series storage array (red box) and the policy in use for the configured I/O paths (blue box) of the attached LUNs.

Since we’re talking about multipathing, I’d be remiss if I did not cover PowerPath’s configuration ability to provide load balancing and failover policies specific to your individual storage array. A licensed version of PowerPath will default to the proprietary Adaptive load balancing and failover policy that assigns I/O requests based on an algorithmic decision of path load and logical device priority. The below screenshot of the powermt man page shows some of these available policies.
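
If you want to check or change the policy from the command line, the illustrative powermt invocations below sketch how that is typically done. Policy names and defaults vary by PowerPath version and license, so confirm the exact syntax against the powermt man page for your release.

# Show all PowerPath-managed devices and the policy currently applied to their paths
powermt display dev=all

# Example: apply the Adaptive (ad) load-balancing and failover policy to all devices
powermt set policy=ad dev=all

# Save the running configuration so it persists across reboots
powermt save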

If you haven’t taken the time to introduce PowerPath into your environment for multipathing needs, hopefully this blog sparked an interest that will get you in front of this outstanding multipathing management utility. You can give PowerPath a try, free for a 45-day trial, by reviewing PowerPath Downloads.

Native Device Mapper Multipathing (DM-MPIO)

Do you spend hours kludging that perfect /etc/multipath.conf file with your bash one-liners? For those keeping score, give yourself +10 Experience.

Configuring DM-MPIO natively on Linux, whether it’s RHEL 6.3, RHEL 7.3 or SLES 12 SP2, follows a programmatic approach by first finding those WWIDs being presented to your host from your SC Series array and then applying various configuration attributes on top of the multipath device (/dev/mapper/SC_VOL_01, for example) being created.

Speaking of finding WWIDs, the below find_sc_wwids function (available for your .bashrc consideration) should help and increase your score by +5 Ability.

find_sc_wwids () {
  # List Compellent (SC Series) devices reported by lsscsi, print each device path with its WWID, and sort by WWID.
  for x in `/usr/bin/lsscsi|/usr/bin/egrep -i compelnt|/usr/bin/awk -F' ' '{print $7}'`; do /usr/bin/echo -n "$x:";/usr/lib/udev/scsi_id -g -u $x; done | /usr/bin/sort -t":" -k2
}

Note: All .bashrc functions provided in this blog post were written and tested on RHEL 6.3, RHEL 7.3 & SLES 12 SP2.

While there are configurable attributes within the multipaths section for each specified device (alias, uid, gid, mode, etc.) of the configuration file, there are some attributes within the defaults section that may need adjusting according to the performance requirements of your project. The next couple of sections focus on these attributes while providing some coding examples to help with your unique implementation.
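
As a point of reference, a multipaths stanza for a single SC Series volume might look like the sketch below. The WWID shown is a placeholder (substitute a value returned by find_sc_wwids), and the ownership and mode values are purely illustrative rather than a Dell EMC recommendation.

# /etc/multipath.conf (excerpt) – illustrative only
multipaths {
    multipath {
        wwid   36000d31000000000000000000000001   # placeholder; use a WWID from find_sc_wwids
        alias  SC_VOL_01                          # friendly name, surfaces as /dev/mapper/SC_VOL_01
        uid    0
        gid    0
        mode   0600
    }
}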

Default value for path_selector

Configuring DM-MPIO on RHEL 6.3, RHEL 7.3, or SLES 12 SP2 changes the default path_selector algorithm from "round-robin 0" to the more latency-sensitive "service-time 0". Performance gains can be obtained by moving to "service-time 0" because it monitors the latency of the configured I/O paths and load balances accordingly. This is a much more efficient I/O path selector than the older round-robin method, which simply spread the load across all active paths regardless of the latency affecting them.
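
Per the discussion above, these releases end up using "service-time 0" by default, so nothing usually needs to change; the defaults excerpt below is a minimal sketch of how you would pin the selector explicitly if, for example, you need a mixed fleet of hosts to behave consistently.

# /etc/multipath.conf (excerpt) – illustrative only
defaults {
    path_selector    "service-time 0"   # latency-aware selector discussed above
}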

Multipath device configuration attributes

Now that it's time to start digging into the weeds of the configurable attributes of each multipath device being created, save yourself some CLI kludging with the following dm_iosched_values function (available for your .bashrc consideration) and increase your score by +5 Ability.

dm_iosched_values () {
# Resolve each /dev/mapper/SC* symlink to its underlying dm-N device, then print
# every tunable file under /sys/block/dm-N/queue/ together with its current value.
for x in `/usr/bin/ls -al /dev/mapper/SC*|/usr/bin/awk -F' ' '{gsub(/\.\.\//,"");print $11}'`;
do
  printf "%b\n" "Config files for /sys/block/$x/queue/"
  for q in `/usr/bin/find /sys/block/$x/queue/ -type f`;
  do
    printf "%b" "\t$q: "
    /usr/bin/cat "$q"
  done
done
}

The above dm_iosched_values function will give you a quick reference to all configurable attributes, as shown in the below screenshot, which should help with configuring the host for optimal DM-MPIO performance.

max_sectors_kb

Depending on the performance requirements of your project and application, the "max_sectors_kb" value of the host's configured DM-MPIO devices may need to be changed. The below .bashrc functions can help with both discovering and changing this value (available for your .bashrc consideration) and increase your score by +5 Ability.

sc_max_sectors_kb () {
  for x in `/usr/bin/ls -al /dev/mapper/SC*|/usr/bin/awk -F' ' '{gsub(/\.\.\//,"");print $11}'`; do /usr/bin/echo -n "$x:Existing max_sectors_kb ->"; /usr/bin/cat /sys/block/$x/queue/max_sectors_kb;done
}

set_512_MaxSectorsKB () {
for x in `/usr/bin/ls -al /dev/mapper/SC*|/usr/bin/awk -F' ' '{gsub(/\.\.\//,"");print $11}'`;
do
  printf "%b" "$x:\tExisting max_sectors_kb:"
  /usr/bin/cat /sys/block/$x/queue/max_sectors_kb
  /usr/bin/echo 512 > /sys/block/$x/queue/max_sectors_kb
  printf "%b" "\tNew max_sectors_kb:"
  /usr/bin/cat /sys/block/$x/queue/max_sectors_kb
done
}

set_2048_MaxSectorsKB () {
  for x in `/usr/bin/ls -al /dev/mapper/SC*|/usr/bin/awk -F' ' '{gsub(/\.\.\//,"");print $11}'`; do /usr/bin/echo -n "$x:Existing max_sectors_kb:";/usr/bin/cat /sys/block/$x/queue/max_sectors_kb; /usr/bin/echo 2048 > /sys/block/$x/queue/max_sectors_kb; /usr/bin/echo -n "$x:New max_sectors_kb:";/usr/bin/cat /sys/block/$x/queue/max_sectors_kb; done
}

When SC Series arrays present LUNs to the host, this value is automatically set based on the SC array's page pool size. For example, if the SC Series array uses a 512 KB page pool, the host sets the "max_sectors_kb" value to 512; likewise, for a 2 MB page pool the host assigns a "max_sectors_kb" value of 2048.

queue_depth

Queue depth is one of those configuration values that must be tested and tuned to achieve optimal performance within your SAN environment. The best way to get there is to evaluate your SAN environment, test, and then adjust the values to meet your performance requirements. The below .bashrc functions can help with both discovering and changing the queue_depth value (available for your .bashrc consideration):

sc_queue_depths () {
  for x in `/usr/bin/lsscsi|/usr/bin/egrep -i compelnt|/usr/bin/awk -F' ' '{gsub(/\/dev\//,"");print $7}'`; do /usr/bin/echo -n "$x:Existing Queue Depth -> ";/usr/bin/cat /sys/block/$x/device/queue_depth; done
}

set_64_queue_depth () {
  for x in `/usr/bin/lsscsi|/usr/bin/egrep -i compelnt|/usr/bin/awk -F' ' '{gsub(/\/dev\//,"");print $7}'`; do /usr/bin/echo -n "$x:Existing Queue Depth:";/usr/bin/cat /sys/block/$x/device/queue_depth; /usr/bin/echo 64 > /sys/block/$x/device/queue_depth; /usr/bin/echo -n "$x:New Queue Depth:";/usr/bin/cat /sys/block/$x/device/queue_depth; done
}

set_32_queue_depth () {
  for x in `/usr/bin/lsscsi|/usr/bin/egrep -i compelnt|/usr/bin/awk -F' ' '{gsub(/\/dev\//,"");print $7}'`; do /usr/bin/echo -n "$x:Existing Queue Depth:";/usr/bin/cat /sys/block/$x/device/queue_depth; /usr/bin/echo 32 > /sys/block/$x/device/queue_depth; /usr/bin/echo -n "$x:New Queue Depth:";/usr/bin/cat /sys/block/$x/device/queue_depth; done
}

Additional information

I know, this post almost qualified for a full tl;dr (Too Long, Didn't Read) opt-out, yet here we are at the end. Please give yourself +10 Stamina points.

With this post highlighting the PowerPath multipathing management utility while also touching on some of the unique configuration scenarios that come up with native DM-MPIO, hopefully you are now thinking about the multipathing policies you have in place and how they could be tuned or replaced altogether.

Dell EMC PowerPath

SUSE Storage Administration Guide

Red Hat Enterprise Linux 6 – DM Multipath

Red Hat Enterprise Linux 7 – DM Multipath

Designed for OPEX Savings: DSS 9000

Stephen Rousset – Distinguished Eng., ESI Director of Architecture, Dell EMC. Design is a funny word. Some people think design means how it looks. But of course, if you dig deeper, it's really how it...(read more)

Configuring NVDIMM inside a VM on Windows Server 1709

This blog was originally written by Teixeira Fabiano. Howdy! Have you heard about NVDIMM-N? NVDIMM-N is a very nice feature available on the Dell EMC 14G platform (click here for more details). NVDIMM-N (also known as Persistent Memory [PMEM]) is a...(read more)

Dell EMC SC5020: How to get even more out of this flexible, affordable storage array

Written by Marty Glaser. The Dell EMC SC5020 array is a next-generation storage array that expands on the capabilities of our most popular SC Series model ever, the SC4020. Now we're expanding the number of technical resources to help you get...(read more)

Network Automation with Dell EMC OS9, OS10, Open Switch OPX and Ansible

This blog describes the Open Switch OPX network automation demo based on the Ansible framework, as delivered by Dell EMC at AnsibleFest2017. The details of the demo, including videos and configuration playbooks, are available here. Demo...(read more)

Building the Optimal Machine Learning Platform

Austin Shelnutt - Distinguished Eng., ESI Architecture, Dell EMC. While various forms of machine learning have existed for several decades, the past few years of development have yielded some extraordinary progress in democratizing the capabilities...(read more)

Dell PowerEdge VRTX support for VMware ESXi 6.5

This blog post is written by Thiru Navukkarasu and Krishnaprasad K from Dell Hypervisor Engineering. Dell PowerEdge VRTX had not been supported on the VMware ESXi 6.5 branch thus far. Dell announced support for VRTX with the Dell customized version of...(read more)

DellEMC firmware catalogs for VMware ESXi

What is a firmware catalog? A firmware catalog is an aggregation of all firmware bundles that are supported on multiple generations of DellEMC servers. Firmware catalogs for DellEMC customized VMware ESXi: DellEMC started releasing firmware catalogs...(read more)

VMware ESXi CLI Support for DellEMC BOSS-S1

Introduction to BOSS: The DellEMC PowerEdge Boot Optimized Storage Solution, aka BOSS-S1, is a new boot device introduced by Dell for use on the 14th generation of PowerEdge servers. The BOSS-S1 supports two M.2 disks, which can either be configured as a RAID...(read more)

Some Tips for Running R640/R740/R940 PowerEdge Servers - Part 1 Hardware Configurations

Today I am going to talk about the hardware configurations on the R740; it also applies to the R640 and R940. Let's assume we have a server with many hardware options available: RDIMMs, NVDIMMs, BOSS, HBA, PERC, NVMe hard drive, embedded on-board NIC, PCIe NIC...(read more)

Some Tips for Running R640/R740/R940 PowerEdge Servers - Part 2 Heat Dissipation

Based on the previous discussion in Part 1 Hardware Configurations, I am going to extend the discussion to lowering heat dissipation and noise. The memory modules and PCIe devices all generate a lot of heat even when they are idle. iDRAC controls...(read more)

Some Tips for Running R640/R740/R940 PowerEdge Servers - Part 3 Software

Now that you have set up all your hardware, let's install an OS. SUSE and Red Hat are both officially supported if you would like to purchase them and their service packages. If you would like to test out the latest features on the latest kernel, Fedora...(read more)
