Channel Catalog



    iDRAC6, the Dell Remote Access Controller in the 11th generation of PowerEdge servers, supports TLS version 1.0, TLS version 1.1, and TLS version 1.2 (cryptographic protocols designed to provide communications security over a computer network)...(read more)

  • 08/03/17--11:16: Dell EMC Unity 4.2 Release
  • Blog author: Chuck Armstrong, Dell EMC Storage Engineering. Dell EMC™ has just released a new Dell EMC Unity All-Flash portfolio: the 350F, 450F, 550F, and 650F. These new all-flash arrays are based on the latest Intel® Broadwell...(read more)


    This blog is written by Murugan Sekar & Revathi from the Dell Hypervisor Engineering Team. Dell EMC has introduced the next generation (14G) of PowerEdge servers, which support the Intel Xeon Processor Scalable Family (Skylake-SP). This blog highlights 14G...(read more)


    Dell provides a vast array of Enterprise Systems Management solutions for many different IT needs and use cases.  With so many useful options available, it's not always immediately obvious which Dell OpenManage solution will work best for your environment and requirements, whether you need to deploy, update, monitor, or maintain systems.

    Luckily, Dell recently created an advisor tool that will recommend which OpenManage products will work best for you based on (but not limited to) factors in your environment such as:

    • Functionality you wish to implement (Monitor / Manage / Deploy / Maintain)
    • Size and brand mix of your server environment
    • Features of your Dell servers
    • Mix of physical vs virtual hosts
    • Current Dell and third-party systems management tools you utilize
    • Need for one-to-one or one-to-many tools


    Once you complete a short questionnaire, the advisor will suggest the OpenManage tools that best suit your needs and provide useful information and links so that you can learn more.

    Dell Systems Management Advisor

    Other than the Systems Management Advisor, Dell TechCenter provides a wealth of information if you would like to evaluate our Systems Management technologies.  Please visit our additional Dell OpenManage links:



    Ed Bailey – Distinguished Eng., ESI Architecture, Dell EMC. “What if you could manage more devices, with fewer interfaces?” In today’s large-scale environments you are always managing more and more devices – more compute...(read more)


    This blog was originally written by Teixeira Fabiano. Greetings! The Dell EMC iDRAC (Integrated Dell Remote Access Controller) has been a great feature of Dell EMC servers for a long time; the 9th generation of iDRAC was just delivered on our new 14G servers. How great...(read more)


    Paul Vancil – Distinguished Eng., ESI Architecture, Dell EMC. In today’s typical large-scale data center, the number of servers that organizations are trying to keep track of has reached enormous proportions. In addition to that, existing...(read more)

  • 09/11/17--06:36: Rack Scale & Ready: DSS 9000
  • Author: Robert Tung, Sr. Consultant, Product Management, Extreme Scale Infrastructure. This month, the Dell EMC Extreme Scale Infrastructure (ESI) group is releasing the latest version of the DSS 9000 – an open, agile and efficient rack scale infrastructure...(read more)

  • 10/02/17--11:09: Linux Multipathing
  • Author: Steven Lemons

    What exactly is Linux multipathing? It’s the process used to configure multiple I/O access paths from Dell™ PowerEdge™ Servers to the Dell EMC™ SC Series storage block devices.

    When thinking about how best to access these SC Series block devices from a Linux host, there are generally two configuration options available when needing the fail-over and load-balancing benefits of multipathing:

    • Dell EMC PowerPath™
    • Native Device Mapper Multipathing (DM-Multipath)

    Much like gaming, where each of us prefers a specific character configuration, each Linux admin has a preferred method of multipathing configuration. This blog calls out PowerPath with its support for SC Series storage, in addition to highlighting some of the native DM-Multipath configuration attributes that might otherwise be taken at “default” value (you seasoned Linux admins will get that joke). While highlighting these additional configuration attributes, which might enrich your DM-Multipath journey (let’s face it, the wrong choice here is a headache generator), we’d also like to present some simple Bash functions for inclusion within your administrative tool belt.

    The scenarios discussed in this blog adhere to the following Dell EMC recommended best practices. If you’re already familiar with our online resources, give yourself +10 Wisdom.

    Note: All referenced testing or scripting was completed on RHEL 6.9, RHEL 7.3 & SLES 12 SP2.

    Dell EMC PowerPath

    PowerPath supports SC Series storage. Go ahead, take a moment to let that sink in.

    YES! This exciting announcement was made at Dell EMC World 2016, in the live music capital of the world – Austin, TX.

    So, let’s see here….

    • PowerPath supports the following Operating Systems (OSes): AIX®, HP-UX, Linux, Oracle Solaris®, Windows®, and vSphere ESXi®.
    • PowerPath supports many Dell EMC Storage Arrays, including Dell EMC Unity, SC Series, and Dell EMC VPLEX
    • PowerPath provides a single management interface across expanding physical and virtual environments, easing the learning curve
    • PowerPath provides a single configuration script for dynamic LUN scanning on addition to or removal from the host (/etc/opt/emcpower/emcplun_linux – a huge time saver versus hand-crafting a bunch of for loops with echo statements on the CLI whenever you need to scan for SAN changes at the Linux host level)
    • PowerPath provides built-in performance monitoring (disabled by default) for all of its managed devices

    The below screenshot of the powermt (PowerPath Management Utility) output highlights the attached SC Series storage array (red box) and the policy in use for the configured I/O paths (blue box) of the attached LUNs.

    Since we’re talking about multipathing, I’d be remiss if I did not cover PowerPath’s ability to provide load-balancing and failover policies specific to your individual storage array. A licensed version of PowerPath defaults to the proprietary Adaptive load-balancing and failover policy, which assigns I/O requests based on an algorithmic decision of path load and logical device priority. The below screenshot of the powermt man page shows some of these available policies.

    If you haven’t taken the time to introduce PowerPath into your environment for multipathing needs, hopefully this blog sparked an interest that will get you in front of this outstanding multipathing management utility. You can give PowerPath a try, free for a 45-day trial, by reviewing PowerPath Downloads.

    Native Device Mapper Multipathing (DM-MPIO)

    Do you spend hours kludging that perfect /etc/multipath.conf file with your bash one-liners? For those keeping score, give yourself +10 Experience.

    Configuring DM-MPIO natively on Linux, whether it’s RHEL 6.9, RHEL 7.3, or SLES 12 SP2, follows a programmatic approach: first find the WWIDs being presented to your host from your SC Series array, then apply various configuration attributes on top of the multipath device (/dev/mapper/SC_VOL_01, for example) being created.

    Speaking of finding WWIDs, the below find_sc_wwids function (available for your .bashrc consideration) should help and increase your score by +5 Ability.

    find_sc_wwids () {
      # For each COMPELNT (SC Series) device reported by lsscsi, print
      # "<device>:<WWID>", sorted by WWID so multipath members group together
      for x in $(/usr/bin/lsscsi | /usr/bin/egrep -i compelnt | /usr/bin/awk '{print $7}'); do
        /usr/bin/echo -n "$x:"
        /usr/lib/udev/scsi_id -g -u "$x"
      done | /usr/bin/sort -t":" -k2
    }

    Note: All .bashrc functions provided in this blog post were written and tested on RHEL 6.9, RHEL 7.3 & SLES 12 SP2.

    While there are configurable attributes within the multipaths section for each specified device (alias, uid, gid, mode, etc.) of the configuration file, there are some attributes within the defaults section that may need adjusting according to the performance requirements of your project. The next couple of sections focus on these attributes while providing some coding examples to help with your unique implementation.
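    As an illustration of those multipaths-section attributes, here is a minimal /etc/multipath.conf fragment; the WWID shown is a placeholder, so substitute the value scsi_id reports for your own volume (for example, via the find_sc_wwids function in this post):

```text
multipaths {
    multipath {
        # Placeholder WWID -- replace with the WWID of your SC Series volume
        wwid   36000d31000abcd000000000000000001
        # Creates the friendly device node /dev/mapper/SC_VOL_01
        alias  SC_VOL_01
        mode   0600
    }
}
```

    After editing the file, reload the configuration (for example, with "service multipathd reload" or "systemctl reload multipathd") for the alias to take effect.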

    Default value for path_selector

    On RHEL 6.9, RHEL 7.3, and SLES 12 SP2, the default path_selector algorithm has changed from “round-robin 0” to the more latency-sensitive “service-time 0”. Performance gains can be obtained with “service-time 0” due to its ability to monitor the latency of configured I/O paths and load balance accordingly. This is a much more efficient I/O path selector than the prior round-robin method, which simply balanced load across all active paths regardless of the latency impacting those paths.
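    In multipath.conf terms, pinning that choice explicitly looks like the fragment below. This is just a sketch; on the releases above the default already selects “service-time 0”, so the entry simply makes the intent visible in your configuration:

```text
defaults {
    # Latency-aware selector: routes I/O down the path with the shortest
    # estimated service time rather than blindly rotating through paths
    path_selector    "service-time 0"
}
```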

    Multipath device configuration attributes

    Now that it’s time to start digging into the weeds of those configurable attributes of each multipath device being created, save yourself some cli kludging with the following dm_iosched_values function (available for your .bashrc consideration) and increase your score by +5 Ability.

    dm_iosched_values () {
      # For each SC multipath device under /dev/mapper, resolve the underlying
      # dm-* node and dump every tunable under its /sys/block/<dm>/queue/ tree
      for x in $(/usr/bin/ls -al /dev/mapper/SC* | /usr/bin/awk '{gsub(/\.\.\//,""); print $11}'); do
        printf "%b\n" "Config files for /sys/block/$x/queue/"
        for q in $(/usr/bin/find /sys/block/$x/queue/ -type f); do
          # $q is already the full sysfs path, so print it as-is
          printf "%b" "\t$q:"
          /usr/bin/cat "$q"
        done
      done
    }

    The above dm_iosched_values function will give you a quick reference to all configurable attributes, as shown in the below screenshot, which should help with configuring the host for optimal DM-MPIO performance.

    max_sectors_kb

    Depending on the performance requirements of your project, the “max_sectors_kb” value of the host-configured DM-MPIO devices may need to be changed to suit your application. The below .bashrc functions can help with both discovering and changing this value (available for your .bashrc consideration) and increase your score by +5 Ability.

    sc_max_sectors_kb () {
      # Report the current max_sectors_kb for each SC multipath device
      for x in $(/usr/bin/ls -al /dev/mapper/SC* | /usr/bin/awk '{gsub(/\.\.\//,""); print $11}'); do
        /usr/bin/echo -n "$x:Existing max_sectors_kb ->"
        /usr/bin/cat /sys/block/$x/queue/max_sectors_kb
      done
    }

    set_512_MaxSectorsKB () {
      # Show the existing value, write 512, then show the new value
      for x in $(/usr/bin/ls -al /dev/mapper/SC* | /usr/bin/awk '{gsub(/\.\.\//,""); print $11}'); do
        printf "%b" "$x:\tExisting max_sectors_kb:"
        /usr/bin/cat /sys/block/$x/queue/max_sectors_kb
        /usr/bin/echo 512 > /sys/block/$x/queue/max_sectors_kb
        printf "%b" "\tNew max_sectors_kb:"
        /usr/bin/cat /sys/block/$x/queue/max_sectors_kb
      done
    }

    set_2048_MaxSectorsKB () {
      # Identical to set_512_MaxSectorsKB, but writes 2048 (for 2 MB SC page pools)
      for x in $(/usr/bin/ls -al /dev/mapper/SC* | /usr/bin/awk '{gsub(/\.\.\//,""); print $11}'); do
        /usr/bin/echo -n "$x:Existing max_sectors_kb:"
        /usr/bin/cat /sys/block/$x/queue/max_sectors_kb
        /usr/bin/echo 2048 > /sys/block/$x/queue/max_sectors_kb
        /usr/bin/echo -n "$x:New max_sectors_kb:"
        /usr/bin/cat /sys/block/$x/queue/max_sectors_kb
      done
    }

    When SC Series arrays present LUNs to the host, this value is set automatically based on the SC array’s page pool size. For example, if the SC Series array uses a 512 KB page pool, the host sets the “max_sectors_kb” value to 512; likewise, with a 2 MB page pool, the host assigns a “max_sectors_kb” value of 2048.
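    That page-pool-to-max_sectors_kb relationship can be sketched as a tiny helper. The function name and its argument format are hypothetical, for illustration only:

```shell
# Map an SC Series page-pool size to the max_sectors_kb value the host
# is expected to pick up. Hypothetical helper for illustration.
sc_page_to_max_sectors_kb () {
  case "$1" in
    512k) echo 512  ;;  # 512 KB page pool -> max_sectors_kb of 512
    2m)   echo 2048 ;;  # 2 MB page pool   -> max_sectors_kb of 2048
    *)    echo "unsupported page size: $1" >&2; return 1 ;;
  esac
}
```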

    queue_depth

    Queue depth is one of those configuration values that must be tested and tuned to achieve optimal performance within your SAN environment. The best way to get there is to evaluate your SAN environment, test against that evaluation, and then adjust the values to meet your performance requirements. The below .bashrc functions can help with both discovering and changing this queue_depth value (available for your .bashrc consideration):

    sc_queue_depths () {
      # Report the current queue_depth for each COMPELNT (SC Series) SCSI device
      for x in $(/usr/bin/lsscsi | /usr/bin/egrep -i compelnt | /usr/bin/awk '{gsub(/\/dev\//,""); print $7}'); do
        /usr/bin/echo -n "$x:Existing Queue Depth -> "
        /usr/bin/cat /sys/block/$x/device/queue_depth
      done
    }

    set_64_queue_depth () {
      # Show the existing queue_depth, write 64, then show the new value
      for x in $(/usr/bin/lsscsi | /usr/bin/egrep -i compelnt | /usr/bin/awk '{gsub(/\/dev\//,""); print $7}'); do
        /usr/bin/echo -n "$x:Existing Queue Depth:"
        /usr/bin/cat /sys/block/$x/device/queue_depth
        /usr/bin/echo 64 > /sys/block/$x/device/queue_depth
        /usr/bin/echo -n "$x:New Queue Depth:"
        /usr/bin/cat /sys/block/$x/device/queue_depth
      done
    }

    set_32_queue_depth () {
      # Identical to set_64_queue_depth, but writes 32
      for x in $(/usr/bin/lsscsi | /usr/bin/egrep -i compelnt | /usr/bin/awk '{gsub(/\/dev\//,""); print $7}'); do
        /usr/bin/echo -n "$x:Existing Queue Depth:"
        /usr/bin/cat /sys/block/$x/device/queue_depth
        /usr/bin/echo 32 > /sys/block/$x/device/queue_depth
        /usr/bin/echo -n "$x:New Queue Depth:"
        /usr/bin/cat /sys/block/$x/device/queue_depth
      done
    }
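    One caveat worth noting: values echoed into sysfs do not survive a reboot. A common way to persist a queue_depth setting is a udev rule. The fragment below is a sketch only; the rule file name is arbitrary, and the vendor match assumes the COMPELNT vendor string seen in the lsscsi output above:

```text
# /etc/udev/rules.d/99-sc-queue-depth.rules (file name is arbitrary)
# Apply a queue_depth of 64 to SC Series (COMPELNT) SCSI devices at add/change
ACTION=="add|change", SUBSYSTEM=="scsi", ATTR{vendor}=="COMPELNT*", ATTR{queue_depth}="64"
```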

    Additional information

    I know, this post almost qualified for a tl;dr (Too Long; Didn’t Read) opt-out, yet here we are at the end. Please give yourself +10 Stamina points.

    With this post highlighting the PowerPath multipathing management utility while also touching on some of the unique configuration scenarios you encounter with native DM-MPIO, hopefully you are now thinking about your own multipathing policies and how they could be tuned up or replaced altogether.

    Dell EMC PowerPath

    SUSE Storage Administration Guide

    Red Hat Enterprise Linux 6 – DM Multipath

    Red Hat Enterprise Linux 7 – DM Multipath



    Stephen Rousset – Distinguished Eng., ESI Director of Architecture, Dell EMC. Designed for OPEX Savings: DSS 9000. Design is a funny word. Some people think design means how it looks. But of course, if you dig deeper, it’s really how it...(read more)