Channel: Dell TechCenter

Meet the New SanDisk DAS Cache — The time for a new architecture of servers with solid-state storage acceleration has arrived!


By Rich Petersen, Director, Software Marketing, SanDisk

Today we announced a technology that will be available for the next generation of Dell PowerEdge servers: the SanDisk DAS Cache software. It’s one of the many advantages of the new Dell PowerEdge servers that you’re going to want to get to know better.

You already know that storage latency is a performance bottleneck for many of your applications, and you’ve heard how flash memory reduces storage latency. But servers with direct attached storage can’t benefit from solid-state memory deployed in external storage systems like all-flash arrays or hybrid flash SAN appliances.

Now, by complementing the disk-based storage in the server with a solid-state cache, Dell PowerEdge servers bring the performance benefits of flash memory to servers with direct-attached, disk-based storage. The combination gives you the reliability and capacity of disk-based storage, while the SSD cache reduces I/O latency and dramatically raises application performance.

At SanDisk, we’ve tested the new SanDisk DAS Cache solution with a variety of workloads and performance measurements to prove that it can:

  • Reduce latency and increase transactions per minute in OLTP workloads
  • Increase throughput and reduce processing time in analytics, and
  • Provide greater concurrency and performance for line-of-business applications.

In a typical test, the SanDisk DAS Cache delivered an 11x increase in performance for a standard OLTP workload.*


So how does SanDisk DAS Cache work?

The SanDisk DAS Cache software runs as a driver within the server operating system, where it identifies your applications’ “hot data,” and makes it accessible from the SSD. Because it offers both “write-back” and “write-through” caching, the software can accelerate both read and write operations.

But there’s more to the story than just increased performance. The SanDisk DAS Cache is designed to deliver flexibility, scalability and value.

As a software-based caching solution, the SanDisk DAS Cache gives you the flexibility to set up the cache in your tower, rack and blade server configurations, and you can choose the SSDs that are most appropriate to your capacity, performance and budget requirements. You can also configure the SSDs in the hardware or software RAID that works best for you, and the software simply works with the logical volume that you create from the SSDs you’ve selected and configured.

We’re also really excited that the SanDisk DAS Cache can scale to match the SSD storage capacity of the new Dell PowerEdge servers. The maximum cache capacity is 16TB, and you can create up to four separate caches in a single server for different QoS requirements. Oh, and how could I forget to mention that with the SanDisk DAS Cache software you can also accelerate up to 2048 volumes!

How solid-state storage will save you money

Finally, one of the best reasons to take a closer look at the SanDisk DAS Cache software is that putting solid-state storage in a server can actually save money. By using just a small amount of solid-state storage, the cache reduces overall storage latency and increases application performance. So you continue to benefit from the economical capacity of disk-based storage, but in a system that delivers flash performance.

This solution for application acceleration means that the next Dell PowerEdge server you purchase can give your organization a giant step forward in terms of both performance and value. The time for a new architecture of servers with solid-state storage acceleration has arrived. 

*Testing by SanDisk with a standard OLTP workload on MS-SQL server showed an 11x increase in TpmC score versus the same server with only HDD storage in RAID 5 configuration. Complete test methodology and results are available at http://www.sandisk.com/assets/docs/SanDisk-DAS-Cache-OLTP-Performance-Whitepaper.pdf 

 

About the Author

Rich Petersen, Director of Software Marketing at SanDisk, has twenty years of experience in marketing and product management for enterprise software technologies. Rich has focused on pioneering new solution categories, building company momentum and market traction, and fostering growth of customer, developer and partner ecosystems. At SanDisk, Rich leads the marketing effort for the company’s enterprise software products, including the FlashSoft software products. Rich’s team is responsible for product management, marketing communications and sales support programs.

 



Curious about VMware Virtual Volumes?


Written by David Glynn:

For several years we’ve been talking about VMware Virtual Volumes, or VVols for short. And we’ve been saying that it is coming, coming soon. Well, we are still saying that, and it is coming soon, I promise, but I can’t tell you when. However, what I can do for you today is let you get your hands dirty with VVols running on EqualLogic Storage. Pretty cool, right?

For VMworld 2014, we worked with the VMware Hands-on-Lab team to create a lab from which VMware customers, not just Dell Storage customers, could experience VVols first hand. Folks got to experience how day-to-day tasks like snapshots and cloning are accelerated by the Virtual Volume integration with Dell EqualLogic, and how the workflows involved remain similar (because as cool as tech can be, no one likes it when things change for no good reason). They also got to perform a number of the VVols configuration steps, and then perform the same configuration tasks with fewer steps and from a single interface using the Dell EqualLogic Virtual Storage Manager vSphere plugin.

However, the fun didn’t stop at VMworld! Since then we’ve made a few performance tweaks to make things run faster, and now we are proud to announce that the lab is available and accessible around the clock at http://labs.hol.vmware.com/HOL/#lab/1513

Oh, and to brag a little bit, we were the only one of VMware’s many Storage Partners to do this. Just another example of Dell and VMware working hand-in-hand.

For those of you asking “What are VVols, and why should I care?”, let me summarize it for you into one word. Granularity. Literally, that is hundreds of blog posts and slide decks summarized into one word. Granularity. And what do I mean by that? VVols enable block storage to be VM-aware, and as your SAN, if I am aware of you and understand you, I can do things better with you. But better in what way?

Granularity. There is that word again. Today when we use SAN snapshots to protect virtual machines, we do it at the datastore/volume/LUN level (datastores, volumes, and LUNs are all the same thing, it just depends on who in the datacenter you are talking to). This means that the virtual machine you want to protect, that business critical SQL database server, is bringing some baggage with it, which is that not so important but chatty file & print server. This works, but adds overhead and inefficiencies.

With VVols, you pick the individual virtual machine you want to protect, and nothing else. An individual virtual machine can have a very particular protection schedule because of business need or SLA or simply because “Hey, we can do that? Cool!” Oh, and did I mention that you’ll do this directly from the vSphere Web Client? And that it will be faster? As for the file & print server, I’ll still be snapshotting that, but just once a month on the second Tuesday. Yup I created a snapshot schedule template called Patch Tuesday.

One more thing, because I know some of you don’t find data protection exciting. Can I interest you in the health benefits of VVols? With VVols your virtual machine is now a series of volumes on the array, and as a VM-aware array we know which volumes go together to make up a particular virtual machine. This means that when you want a copy of a virtual machine and VMware asks us to do the work, all we have to do is clone a few volumes. And cloning volumes is old hat for an intelligent virtualized array architecture like EqualLogic.

So how is this a health benefit? How long does it take to clone a virtual machine from template? Personally I’ve no idea, because I just go get another cup of coffee, and it is done when I get back. But with VVols it is done before I leave the cube. Sometimes I still go and get coffee.

Still have a thirst for more information about VVols? VMware has already published over 400 sessions from last month’s VMworld 2014 in San Francisco. These can all be accessed from this page: http://www.vmworld.com/community/sessions/2014/

I recommend the “Virtual Volumes Technical Deep Dive” presented by VMware and “VMware VVOL Technical Preview with Dell Storage” co-presented by Dell and VMware. Between these two sessions, you’ll have a firm understanding of what VVols will mean for you and your datacenter.

Dell PowerEdge 13th Generation Servers certified with Ubuntu Server LTS 14.04


The Dell Linux Engineering team is pleased to announce the certification of Dell PowerEdge 13th Generation servers R730, R730xd, R630 & T630 with Ubuntu Server 14.04 LTS Edition. Customers looking to deploy Ubuntu Server can choose PowerEdge 13th Generation servers and 14.04 LTS with confidence knowing that Ubuntu 14.04 LTS is supported by Canonical for 5 years. 

  • For a quick glance at the Ubuntu Support Matrix for PowerEdge servers, click here.
  • For specific server certification details, visit Canonical’s Hardware Compatibility List.

Deploying to the cloud

Ubuntu 14.04 LTS includes Juju, a set of tools to easily deploy and orchestrate services to the cloud. Juju can be used together with MAAS (Metal-As-a-Service) to deploy services on bare metal. Both MAAS and Juju have been tested on all certified Dell PowerEdge 13th Generation servers.
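As a quick illustration, once MAAS has been set up as a Juju environment, deploying a service onto bare metal takes only a few commands. This is a minimal sketch assuming a stock Juju 1.x/MAAS configuration as shipped with 14.04; the mysql charm is used purely as an example:

$ juju bootstrap          # provisions a bootstrap node through MAAS
$ juju deploy mysql       # MAAS allocates a machine and Juju installs the mysql charm
$ juju status             # shows machines, services and their agent states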

Support 

Ubuntu support is available from Canonical through the Ubuntu Advantage program. Best-effort support from Dell is available with your Dell ProSupport contract. For questions and general discussion, contact our mailing list, Linux-PowerEdge; we welcome your participation and feedback.

USB3 Kernel Debugging with Dell PowerEdge 13G Servers


This blog was originally written by Thomas Cantwell, Deepak Kumar and Gobind Vijayakumar from the Dell OS Engineering Team.

Introduction -

Dell PowerEdge 13G servers are the first generation of Dell servers to have USB 3.0 ports across the entire portfolio.  This provides a significant improvement in data transfer speed over USB 2.0.  In addition, it offers an alternative method to debug the Microsoft Windows OS, starting with Windows 8/Windows Server 2012 and later.

Background –

Microsoft Windows versions have had the ability to kernel debug using USB 2.0 since Windows 7/Windows Server 2008 R2, though there were some significant limitations to debugging over USB2.

1)      The first port had to be used (with a few exceptions – see http://msdn.microsoft.com/en-us/library/windows/hardware/ff556869(v=vs.85).aspx ). 

2)      Only a single port could be set for debugging.

3)      You had to use a special hardware device on the USB connection. (http://www.semiconductorstore.com/cart/pc/viewPrd.asp?idproduct=12083).

4)      It was not usable in all instances – in some cases, a reboot with the device attached would hang the system during BIOS POST.  The device would have to be removed to finish the reboot and could be reattached when the OS started booting.  This precluded debugging of the boot path.

USB 3.0 was designed from the ground up, by both Intel and Microsoft, to support Windows OS debugging; it has much higher throughput and is not limited to a single port for debugging.

Hardware Support - 

As previously stated, Dell PowerEdge 13G servers will now support not only USB 3.0 (also known as SuperSpeed USB), but also USB 3.0 kernel debugging. 

BIOS settings -

Enter the BIOS and enable USB 3.0 – it’s under the Integrated Devices category (by default, it is set to Disabled).

  • IMPORTANT!  ONLY enable USB 3.0 if the operating system has support!  Windows 8/Windows Server 2012 and later have this capability.  If you enable this and the OS does NOT have support, you will lose USB keyboard/mouse support when the OS boots.

Ports –

  • USB 3.0 ports on Dell PowerEdge 13G servers can be used for Windows debugging (with USB 3.0 enabled and the proper OS support). 
  • Some systems, such as the Dell PowerEdge T630, also have a front USB 3.0 port.  The Dell PowerEdge R630/730/730XD have only rear USB 3.0 ports.  The Dell PowerEdge M630 blade also has one USB 3.0 front port.

Driver/OS support -

USB 3.0 drivers are native in Windows Server 2012 and Windows Server 2012 R2. There is no support for USB 3.0 debugging in any prior OS versions.

USB3 Debugging Prerequisites –

  • A host system with an xHCI (USB 3.0) host controller.  The USB 3.0 ports on the host system do NOT need USB 3.0 debug support – only the target system must have that.
  • A target system with an xHCI (USB 3.0) host controller that supports debugging.
  • A USB 3.0 (A-A) crossover cable. You can get the cable from many vendors; we have provided one option below.
    • Note: The USB 3.0 specification states that pin 1 (VBUS), 2 (D-), and 3 (D+) are not connected. This means that the cable is NOT backwards compatible with USB 2.0 devices.
  • http://www.datapro.net/products/usb-3-0-super-speed-a-a-debugging-cable.html

Steps to Setup Debugging Environment -

  1. Make sure USB 3.0 is enabled in the BIOS on both host and target.
    1. All Dell 13G servers support USB 3.0 debugging.
  2. Verify the OS on the host and target systems.
    1. The OS must be Win8/WS2012 or Win 8.1/WS2012 R2 on both.  For the debug host, a client OS is perfectly fine for debugging a server OS on the debug target.
    2. It is strongly recommended to use Windows 8.1 and/or Windows Server 2012 R2 on the host to ensure you can get the latest supported Windows debugging software – you can then debug all current and older OS versions (that support USB 3.0 debugging) from the host: http://msdn.microsoft.com/en-us/library/windows/hardware/ff551063(v=vs.85).aspx

3. On the target system, use the USBView tool to locate the specific xHCI controller that supports USB debugging. This tool is part of the Windows debugging tools.  See the following link: http://msdn.microsoft.com/en-in/library/windows/hardware/ff560019(v=vs.85).aspx

To run this tool, you must also install .NET 3.5. It is not installed by default on either Windows 8/2012 or Windows 8.1/2012 R2.
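If .NET 3.5 is missing, one common way to add it is from the Windows installation media using DISM. This is only an example; the D: path assumes the installation media is mounted on that drive:

dism /online /enable-feature /featurename:NetFx3 /all /limitaccess /source:D:\sources\sxs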

  1. On Dell PowerEdge 13G servers there are several USB controllers – some will be designated “EHCI”; these are USB 2.0 controllers.  The controller for USB 3.0 will be designated “xHCI”, as we see below.
    1. You can also see the bus-device-function number of the specific xHCI controller that will be used for debugging below – this is important for proper setup of USB 3.0 debugging.

4. Next, you need to find the specific physical USB port you are going to use for debugging.

  1. On Dell servers the “SuperSpeed” logo is printed beside any USB 3.0 ports.

  • Connect any device to the port you wish to use for debugging.
  • Observe the changes in USBView (you may have to refresh the view to see the new device inserted in the port).
  • Verify the port does indeed show it is “Debug Capable” and has the “SS” mark on the port.

 

 

  5. Operating System Setup – Target system
    1. Open an elevated command prompt on the target system and run the following commands.
  • bcdedit /debug on
  • bcdedit /dbgsettings usb targetname:Dell_Debug (any valid target name can be given here)
  • bcdedit /set "{dbgsettings}" busparams b.d.f (provide the bus, device and function number of the required xHCI controller)
    • Example: bcdedit /set "{dbgsettings}" busparams 0.20.0
    • The busparams setting is important since Dell PowerEdge 13G systems have multiple USB controllers.  It ensures debugging is enabled only for the USB 3.0 (xHCI) controller.
    • Reboot the server after making the changes above!

6. Connect the host and target systems using the USB 3.0 A-A crossover cable. Use the port identified above.

Steps to Start the Debugging Session –

Open a compatible version of WinDbg as administrator (very important!).

  1. Starting the debug session as administrator will ensure the USB debug driver loads.
  2. For USB 3.0 debugging, the OS on the host must be Windows 8/Server 2012 or later and match the “bitness” of the target OS - either 32-bit (x86) or 64-bit (x64). If you are debugging a 64-bit OS (all 2012+ Windows Server versions are 64-bit), then the host OS should be 64-bit as well.
  3. Open File->Kernel Debug->USB and provide the target name you set on the target (in our example, targetname: Dell_Debug). Click OK. (An equivalent command-line invocation is shown below.)
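If you prefer to launch the host-side debugger from a command prompt rather than the menus, WinDbg accepts the same USB settings on its command line. This is just a sketch; the target name must match whatever you set with bcdedit on the target (Dell_Debug in our example):

windbg.exe -k usb:targetname=Dell_Debug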


 

4. The USB debug driver will be installed on the host system. This can be checked in Device Manager (see below); for successful debugging there should not be a yellow bang on this driver. It is OK that it says “USB 2.0 Debug Connection Device” – this is also the USB 3.0 driver (it works for both transports). This driver is installed the first time USB debugging is invoked.

Notes:

  1. If you are debugging for an extended time, disable the “selective suspend” capability for the USB hub under the xHCI controller where the USB debug cable is connected.

In Device Manager:

  1. Choose the “View” menu
  2. Choose “View devices by connection”.  There are multiple USB hubs, so to get the correct hub, you need to find the hub that is under the xHCI controller.
  3. Navigate to the specific xHCI controller.
  4. Under the xHCI controller, you will see a USB Hub.
  5. Choose Properties for the USB Hub, then Power Management, and uncheck “Allow the computer to turn off this device to save power” (see picture below).

 

Summary – Dell PowerEdge 13G servers with USB 3.0 provide a significant new method to debug modern Windows operating systems.

  • USB 3.0 is fast.
  • USB 3.0 debugging is simple to set up.
  • USB 3.0 debugging requires a minimal hardware investment (special cabling).
  • For Dell PowerEdge 13G blades (M630), this provides a new way to debug an individual blade.  Prior methods to debug Dell blades used the CMC (Chassis Management Controller), and routed serial output from the blade to the CMC – harder to configure and limited to serial port speeds.
  • A nice comparison of debug transport speeds – somewhat dated, but gives a good general idea on the speeds.

Transport                 Throughput (KB/s)   Faster than Serial
Serial Port               10                  100%
Serial Over Named Pipe    50                  500%
USB2 EHCI                 150                 1500%
USB2 On-the-Go            2000                20000%
KDNET                     2400                24000%
USB3                      5000                50000%
1394                      17000               170000%

Another Avenue To Avoiding Widespread Vulnerabilities: Small-Footprint Wyse ThinOS-Based Thin Clients


Aside from its dramatic name, two things stood out about the most recent widespread computing vulnerability, known as Shellshock. One was the possibility that the flaw, which allows Bash commands on UNIX or Linux to be abused to take control of an endpoint, may have gone undiscovered for as long as two decades. The other was simply the scope of the vulnerability, which potentially impacted hundreds of millions of users globally, given Bash’s place at the core of two of the most widely used operating system families.

 

The open source community and several major software companies issued patches this week, but Shellshock-related vulnerabilities may persist since a percentage of IT departments may not patch their endpoints while still others may not address the issue with the urgency it requires. Vulnerabilities may also persist because hastily-distributed updates to urgent vulnerabilities are not always entirely comprehensive.

 

As a result, in addition to exploring security options from Dell Data Protection Solutions, one option your IT team may not have considered is investing in a well-provisioned cloud client-computing solution that leverages Wyse ThinOS, the virus-immune firmware base running on our Wyse thin and zero client devices.

 

The Benefits of ThinOS

 

Given its inherent architecture, ThinOS provides an important layer of virus and malware protection at the edge. While most traditional security remediations, such as corporate packet sniffers, firewalls, and anti-virus protection suites, can identify and eliminate malware on your datacenter servers and in your users’ virtual machines, ThinOS can keep your users’ physical endpoints virus free, given its zero attack surface.

 

Bash, a command shell common in UNIX and Linux environments, allows software developers and IT managers to run operating system commands interactively or from within scripts and other programs. However, it was only recently discovered that specially crafted Bash input can be used to exploit endpoints or Web servers running UNIX or Linux-based code. Those vulnerabilities have been nicknamed Shellshock, most likely because they leverage the Bash shell. Although the risk arising from Shellshock to Dell cloud client-computing products is low, we are actively working to eliminate any vulnerability.
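For reference, the widely circulated one-line check for the original Shellshock flaw (CVE-2014-6271) can be run from a Bash prompt on a UNIX or Linux host. This is only a sketch and does not cover the follow-on Bash CVEs; a patched Bash prints just the test string, while a vulnerable build also prints "vulnerable":

$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"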

 

Cloud Client Manager servers have already been updated with available security patches to eliminate this threat, and we will continue to monitor new patches and apply them proactively on our Cloud Client Manager servers as they become available. We are working with our partners SUSE and Canonical, who have issued patches to address the Shellshock vulnerability in SUSE Linux 11 SP3 and Ubuntu, respectively. We are now validating these patches on our Linux operating system firmware images and Linux-based thin clients.

Fewer Malware Vulnerabilities

 

Enterprise IT departments that have migrated users to Wyse thin client and zero client endpoints running our proprietary ThinOS can expect far fewer malware remediation issues. Command line vulnerabilities are generally mitigated due to the closed nature of the operating system, and because ThinOS excludes Bash software by design. Perhaps most importantly, Wyse thin client and zero client devices running ThinOS maintain a small OS footprint of less than 15MB that is not stored on a local hard drive.

 

This absence of a local hard drive presents a smaller attack surface from which outside code might launch malware attacks. Wyse architecture and our ThinOS design will help your IT team avoid future vulnerabilities that might take advantage of Shellshock or SSL-based vulnerabilities such as “Heartbleed,” which had the potential to impact as many as two-thirds of all Web servers last April. Because Wyse ThinOS is not susceptible to the Heartbleed or Shellshock vulnerabilities, it is unlikely to be susceptible to similar exploits in the future, thanks to the security advantages of the ThinOS architecture. This means your IT managers and users will have fewer worries running ThinOS. Contact your Dell representative for more information.

Enabling USB 3.0 on Dell PowerEdge 13G Servers with Microsoft Windows Server Operating Systems


This blog was originally written by Gobind Vijayakumar and Perumal Raja from the Dell OS Engineering Team.

With Dell PowerEdge 13G servers, we have provided the option of enabling support for USB 3.0 for your operating system. This blog focuses on enabling this feature with Microsoft Windows Server operating systems.

First, USB 3.0 is disabled by default in the System BIOS; follow the steps below to enable USB 3.0 on your server.

  1. During POST, press “F2” to enter System Setup. Then select “System BIOS”.
  2. From the list of options, select “Integrated Devices”.
  3. Then you can enable or disable USB 3.0 using the USB 3.0 setting, as shown below.

IMPORTANT!  ONLY enable USB 3.0 if the operating system has support! Otherwise you may lose basic functionality such as the keyboard and mouse.

USB 3.0 driver support in Microsoft Windows Server OS

USB 3.0 drivers are native in Windows Server 2012 and Windows Server 2012 R2 and don’t require any additional configuration; simply enable the USB 3.0 feature in the BIOS and these operating systems will work without any issues.

Windows Server 2008 R2 SP1 doesn’t have native (in-box) driver support for USB 3.0, which may create issues during Windows Server 2008 R2 SP1 installation with USB 3.0 enabled. You can follow one of the methods below to install Windows Server 2008 R2 SP1 along with the USB 3.0 driver and make use of this feature.

For Windows Server 2008 R2 SP1, here is the link for the USB 3.0 drivers (alternatively, they are listed on support.dell.com on the driver download page for your particular server).

Method #1 (using Dell LifeCycle Controller)

  1. Make sure USB 3.0 is disabled before this installation
  2. Boot the system and press F10 at POST to enter LifeCycle Controller
  3. Connect your DVD-ROM drive with the Windows Server 2008 R2 SP1 media.
  4. Select the “Deploy OS” option under OS Deployment.

 

5. On the next screen, you can create/initialize the RAID if it is not already configured; otherwise select the “Go Directly to OS Deployment” option and click NEXT.

6. On the next screen, select the boot mode and the operating system as “Windows Server 2008 R2 SP1”. Then click NEXT.

Note: Secure Boot is only supported with Windows Server 2012 or later.

7. Once selected, Dell Lifecycle Controller pulls in the required drivers, which include the USB 3.0 driver, for OS deployment.

8. After this stage, please select the mode of installation required and complete the OS installation from the media.

9. Enable USB 3.0 in the BIOS; the OS boots up with the required drivers installed.

Method #2

  1. Disable USB 3.0 from the server BIOS settings using the steps given above.
  2. Create a folder named “$WinPEDriver$” on a USB drive and copy the USB 3.0 drivers into the folder from the above link.
  3. Plug the USB drive into one of the ports and start the Windows Server 2008 R2 SP1 installation.
  4. The USB 3.0 drivers will be automatically loaded from the USB drive during the OS installation.
  5. After the OS installation completes, reboot the server and enable USB 3.0 from the server BIOS settings.
  6. The server boots into the OS with the required USB 3.0 driver installed.

Method #3 (Post-install)

  1. Download the USB 3.0 driver from the link provided above and extract the files.
  2. Disable USB 3.0 from the server BIOS settings and install the Windows Server 2008 R2 SP1 OS.
  3. After completing the OS installation, boot into the OS.
  4. Open a “Command Console” and type the command below (see the example after these steps).
      1. pnputil.exe -a <path to the .inf file of the USB 3.0 driver>

                             Note: Don’t use the “-i” switch in the above command.

        5. Reboot the server and enable USB 3.0 from the server BIOS settings.

        6. The server boots into the OS with the required USB 3.0 driver installed.
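As an illustration of step 4, if the driver package were extracted to a folder such as C:\Drivers\USB3 (a hypothetical path used only for this example), the command would look like this:

pnputil.exe -a C:\Drivers\USB3\*.inf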

References/Additional information

1)      Microsoft Support for USB 3.0 - http://msdn.microsoft.com/en-us/library/windows/desktop/hh848067(v=vs.85).aspx

2)      Dell LifeCycle controller - http://en.community.dell.com/techcenter/systems-management/w/wiki/lifecycle-controller/

3)      USB 3.0 Kernel Debugging - http://en.community.dell.com/techcenter/b/techcenter/archive/2014/09/30/usb3-kernel-debugging-with-dell-poweredge-13g-servers

Keynotes, Sessions, and BOGO – OH MY! Dell World & Dell World User Forum Updates Inside!


BOGO

The Dell World User Forum (#DWUF) “Buy 1 Get 1” (BOGO) offer has been extended! Take advantage of this amazing offer to bring a colleague for free. Plus, don’t forget that your Dell World User Forum pass also gains you access to the Dell World (#DellWorld) main event, keynotes and sessions.

Dell World keynotes

Dell World opening keynote: The opening session of Dell World 2014 will set the stage for Dell’s role in the exciting new reality of how the next generation of technology solutions will transform lives, businesses and economies. Read more >

  • Michael Dell, Chairman and Chief Executive Officer, Dell
  • Tom Reilly, Chief Executive Officer, Cloudera
  • Shyam Sankar, President, Palantir
  • Michael Chui, Partner, McKinsey Global Institute

Afternoon keynote: This lively session, featuring Erik Brynjolfsson and Andrew McAfee, will be moderated by reddit co-founder, investor and entrepreneur Alexis Ohanian. Topics will include the impact of personal technology, the near-boundless access to information and how they enrich our lives. Read more >

  • Alexis Ohanian, Entrepreneur and investor, Co-founder of the social news site reddit
  • Andrew McAfee, Principal Research Scientist at MIT
  • Erik Brynjolfsson, Director, MIT Initiative on the Digital Economy; Professor, MIT Sloan School, Chairman, Sloan Management Review; Research Associate, National Bureau of Economic Research

Dell World closing keynote: Michael Dell will be joined by disruption leaders — people inspiring a new way of thinking and leading cultural change. Read more >

  • Michael Dell, Chairman and Chief Executive Officer, Dell
  • Dr. Peter H. Diamandis, Chairman and CEO, X PRIZE Foundation

Dell World User Forum Sessions

Over 160 sessions, spanning from in-depth technical sessions and hands-on labs to solution overview content, will fill your schedule and your mind with knowledge to take back to the office and positively impact your business!

View all of the User Forum sessions and build your agenda today!

Dell World Sessions

More than 50 amazing sessions on Cloud, Security, Mobility and Big Data

Check out our full lineup of more than 50 remarkable sessions that will allow you to experience our innovative hardware, software and services firsthand, through our customers' stories and their successful use cases. Discover strategies to simplify complex IT challenges, drive out inefficiency and — most important — ignite your entrepreneurial spirit so you can help your organization seize the opportunities created by today’s changing world. Complete session list >

Need more? Here is your whipped cream and cherry to add to the banana split of unlimited flavors happening at Dell World!

Don’t forget, we are extending the chance to bring a colleague for free. Take advantage of the Buy One Get One (BOGO) registration offer. Register Today!


Dell World User Forum – Three Seasoned Veterans Tell You Why It’s Worth It


Still deciding whether to spring for Dell World User Forum 2014? It’s coming up November 4-7 in Austin, and we’re doing everything we can to make it easy for you to attend. I want to point out a few highlights of what you can expect, especially as a KACE customer, then I’ll let a few DWUF veterans tell you about their ROI.

BOGO and a free pass to Dell World

User Forum is a lot more than panels and exhibits. User Forum brings you together with the Dell experts – engineers, architects, product managers – who build and support the products you work with day in and day out. It’s your chance to come in with a five-pound bag of questions and get them answered face to face, whether in a lab, at the Geek Bar or in a hallway.

User Forum features hands-on labs in which you can finally sit down for that hour you’ve been promising yourself and dive deep into Dell products like KACE appliances. As soon as you get back to the office – and sometimes even before then – you’ll start to see a return on your investment in productivity.

And speaking of investment, we have two financial incentives for you:

  • Your User Forum pass includes a pass to the Dell World main track event as well. Catch keynotes and presentations by Peter Diamandis of the X PRIZE Foundation, Erik Brynjolfsson of MIT and Michael Dell of – well, you know which company he works for.
  • We’re running a BOGO – a Buy-One-Get-One offer so you can share a pass with a colleague at no additional charge.

If you or your organization is a current Dell customer, User Forum is the place for you.

Sessions

Here are some of the most popular KACE sessions and labs to look for:

  • K1000 Advanced Topics. Our engineers will help you understand what's under the covers of your K1000. You’ll take away a deeper understanding of how best to use this systems management platform in your environment.
  • Software Packaging/Scripting. We’ll talk about packaging, with real-world examples of tough deployments.
  • Software Distribution. We’ll go beyond the basics to some unconventional wisdom around deploying software, including large installers, complex installers and repackaging.
  • Patching: Getting Started, and Going Beyond Basics. Learn how to patch your environment with the K1000, then design a sustainable patching system with integrated automation and reporting.
  • Troubleshooting the K1000. Understanding how to debug is a skill all admins should hone regularly.

We’ll cover these topics and more in breakout sessions, self-paced labs and hands-on labs led by instructors.

3 veterans weigh in on Dell World User Forum

But you don’t have to take my word for it. I asked a few real-world system administrators why they think User Forum is worth it. Here are some of their answers:

  • Ron Falkoff, System Analyst, Mary Institute and Saint Louis Country Day School (Missouri)

“User Forum was productive for us because it accelerated our use of the KACE appliances. This is when we can raise specific issues we are having with others during birds-of-a-feather, or just with our peers in other industries having the same experiences.

“On the way home one year, I implemented Smart Labels from San Francisco Airport, they populated by the time I got to O’Hare, and I began using them when I got to the St. Louis Airport. Another year, I fixed patching remotely on my appliance while talking to an expert at the Geek Bar.

“I would tell people to not miss the product feedback, and to go to one session outside their comfort zone.”

  • Stacy Crotser, Computer Lab Administrator, Sam M. Walton College of Business, University of Arkansas

“I got some great stuff out of the instructor-led labs. Getting to watch a presentation, then jumping right in to try it myself was fantastic! I learned lots of techniques and implemented them when I got home. I have referred time and again to the USB key with all the instructor-led presentations and training sessions. That USB key was the single best thing I have EVER gotten from a conference, and I got value out of it all year long.

“If you are a KACE administrator, then the User Forum is a MUST! There was more information jam-packed into the conference than you could pick up otherwise in a whole year.”

  • Deedra Pearce, Director of Information Systems, Green Clinic Surgical Hospital (Louisiana)

“Before I attended User Forum, I didn't interact much with our KBOXes; I did what I needed to do, then got right out, so I didn't realize all the capabilities they had. At User Forum it was so nice to see all the software integration, especially with all our other appliances. As soon as we returned, we got involved in the 6.0 update and couldn't wait to use the new user-friendly dashboard we’d learned about.

“As IT director I learned so much at User Forum. I’ve been able to help make our CIO’s daily job so much easier in the past year as I've learned more about the system. User Forum has helped me develop a relationship with Dell, and especially with the KACE team.”

Your turn

So there you have them – three seasoned veterans telling you why Dell World User Forum 2014 is worth it. As KACE training lead, I keep in regular contact with peers at lots of companies using KACE. The networking and user base at User Forum grow steadily year upon year.

  • Have a look at the User Forum agenda and start picking out the labs and sessions you want to attend.
  • Double your internal expertise by grabbing your BOGO now. Register for User Forum and get a pass to all Dell World sessions.

Overview of SELinux in RHEL7


This blog post was originally written by Srinivas Gowda G from the Dell Linux Engineering group.

Dell recently announced support for RHEL7 on Dell PowerEdge servers. As is the case with any SELinux (Security Enhanced Linux) enabled OS, your applications might hit SELinux surprises with RHEL7. Applications that ran perfectly well on RHEL6 might now complain about SELinux denials in RHEL7.  Most of the time when faced with SELinux issues, the first temptation is to disable SELinux or switch to Permissive mode. Not the best thing to do!

In this article I intend to give an overview of the SELinux architecture, demonstrate the usage of some of the SELinux utilities available as part of RHEL7, and offer a few pointers on how best to deal with SELinux denials. SELinux provides an excellent access control mechanism built into Linux. It was originally developed by the US National Security Agency but is now part of various Linux distributions, including RHEL7, providing enhanced security to Linux operating systems.

Most Linux users are familiar with DAC (Discretionary Access Control), the various permission levels that can be assigned to files in Linux, as shown in this quick example:

$ ls -l foo 

  -r-xr-xr-x  1 test testgroup   112 Oct  3  2013 foo

SELinux implements Mandatory Access Control (MAC) on top of the existing DAC.  In DAC, the owner of an object specifies which subjects can access the object. Say, for the above file foo, you want to give read and write permission to all users and groups, and in addition provide execute permission to the user “test”. We can use the chmod command to set these permissions:

$ chmod 766 foo

$ ls -l foo

   -rwxrw-rw-  1 test testgroup   112 Oct  3  2014 foo

As we see in DAC, control of access to files is based on the discretion of the owner. Access is controlled based on Linux user and group IDs. 

Configuring SELinux

One of the important files in the configuration space is /etc/selinux/config; here is a sample of the default settings you might find in RHEL7:

# This file controls the state of SELinux on the system.

# SELINUX= can take one of these three values:

#     enforcing - SELinux security policy is enforced.

#     permissive - SELinux prints warnings instead of enforcing.

#     disabled - No SELinux policy is loaded.

SELINUX=enforcing

# SELINUXTYPE= can take one of these three values:

#     targeted - Targeted processes are protected,

#     minimum - Modification of targeted policy. Only selected processes are protected.

#     mls - Multi Level Security protection.

SELINUXTYPE=targeted 

To make persistent changes to the SELinux status or policy type (mls/targeted), you must first edit the /etc/selinux/config file and restart the operating system.

By default, RHEL7 enables SELinux in enforcing mode and SELINUXTYPE is targeted. There are a good number of utilities that can be used to work with SELinux; I’ll try to cover as many as possible in this blog and restrict myself to the default “targeted” policy.

The sestatus utility can be used to check the current status of SELinux on your RHEL7 system.

$ sestatus

SELinux status:                 enabled

SELinuxfs mount:                /sys/fs/selinux

SELinux root directory:         /etc/selinux

Loaded policy name:             targeted

Current mode:                   enforcing

Mode from config file:          enforcing

Policy MLS status:              enabled

Policy deny_unknown status:     allowed

Max kernel policy version:      28

setenforce can be used to change the SELinux mode in a non-persistent way.

$ setenforce

  usage:  setenforce [ Enforcing | Permissive | 1 | 0 ]

Similarly, the getenforce utility can be used to get the current SELinux mode:

$ getenforce

Enforcing

Security Context

Each file, user and process on an SELinux-enabled system has a security label, called a context, associated with it. Here are some examples of how you can check the context of a file, process or user.

To look at the SELinux context of /bin, you can use ls -dZ, or you may prefer to use secon, which is more descriptive.

$ secon -f /bin/

     user: system_u

     role: object_r

     type: bin_t

     sensitivity: s0

     clearance: s0

     mls-range: s0

In this example we have a file, myFile, for which SELinux records a user (unconfined_u), a role (object_r), a type (user_home_t), and a level (s0); it is this information that is used to make access control decisions in SELinux.

$ ls -Z myFile

-rw-rw-r--. user1 group1 unconfined_u:object_r:user_home_t:s0 myFile

Similarly, for a process whose PID is 3161, you can check the SELinux context:

$ secon -p 3161  (or use ps -eZ)

user: unconfined_u

role: unconfined_r

type: unconfined_t

sensitivity: s0

clearance: s0:c0.c1023

mls-range: s0-s0:c0.c1023

sestatus and secon are part of the policycoreutils package. To list SELinux confined users:

$ semanage user -l

                Labeling   MLS/       MLS/                         

SELinux User    Prefix     MCS Level  MCS Range                      SELinux Roles

guest_u            user       s0              s0                                   guest_r

root                  user       s0              s0-s0:c0.c1023               staff_r sysadm_r system_r unconfined_r

staff_u              user       s0             s0-s0:c0.c1023                staff_r sysadm_r system_r unconfined_r

sysadm_u         user       s0             s0-s0:c0.c1023                sysadm_r

system_u         user       s0              s0-s0:c0.c1023                system_r unconfined_r

unconfined_u   user       s0              s0-s0:c0.c1023                system_r unconfined_r

user_u             user       s0              s0                                    user_r

xguest_u          user      s0              s0                                     xguest_r

semanage login -l gives the mapping between SELinux users and Linux users:

$ semanage login -l

Login Name           SELinux User         MLS/MCS Range        Service

 __default__          unconfined_u         s0-s0:c0.c1023           *

 root                      unconfined_u         s0-s0:c0.c1023           *

 system_u              system_u              s0-s0:c0.c1023           *

The default SELinux policy used in RHEL7 is the “targeted” policy. In the targeted policy, the type translates to a domain for a process and to a type for a file. An executable of type type_t transitions to a domain domain_t as defined by the entry-point permissions in the policy tables. If you wish to change security labels, you can do so using the chcon or semanage utility. Changes made using chcon are not persistent and are lost when file systems are relabeled.  If you are dealing with SELinux-related issues, it is very important that you know how to set the correct or appropriate context on files and directories; this will help you solve the majority of SELinux denials.

The following example tries to demonstrate these steps. I have a user “linux” whose home directory has a folder called Music already present

 # Directory Music has type audio_home_t

$ ls -dZ Music

  drwxr-xr-x. linux linux unconfined_u:object_r:audio_home_t:s0 Music

 # Now I will create a dummy file file1 inside "Music" folder and check the context of this new file.

$ touch Music/file1

$ ls -Z Music/

-rw-rw-r--. linux linux unconfined_u:object_r:audio_home_t:s0 file1

 # Let’s use chcon to relabel security label type of directory "Music" from audio_home_t to admin_home_t

$ chcon -t admin_home_t Music

$ ls -dZ Music/

drwxr-xr-x. linux linux unconfined_u:object_r:admin_home_t:s0 Music

 # Now that we have relabeled, let’s create file2 inside "Music"

$ touch Music/file2

 # Unlike file1, which had "audio_home_t", we now see that file2 is labeled "admin_home_t", the same as its parent directory

$ ls -Z Music/

-rw-rw-r--. linux linux unconfined_u:object_r:audio_home_t:s0 file1

-rw-rw-r--. linux linux unconfined_u:object_r:admin_home_t:s0 file2

 restorecon is an SELinux utility that is mainly used to reset the security context of files or directories. It only modifies the type portion of the security context of objects with preexisting labels.

# Restore files to default SELinux security contexts. All the previous changes will be reverted.

$ restorecon -R .   

# We now can see that both Music and files inside this directory are relabeled to the default values.

$ ls -dZ Music/

  drwxr-xr-x. linux linux unconfined_u:object_r:audio_home_t:s0 Music

$ ls -Z Music/

  -rw-rw-r--. linux linux unconfined_u:object_r:audio_home_t:s0 file1

  -rw-rw-r--. linux linux unconfined_u:object_r:audio_home_t:s0 file2

You can’t relabel files with arbitrary types.

$ chcon -t dummy_type_t Music/file1

  chcon: failed to change context of Music/file1 to unconfined_u:object_r:dummy_type_t:s0 : Invalid argument

However, new types can be created by writing new policy rules.

If you are wondering how to validate the correctness of the security context of objects, refer to the matchpathcon utility. matchpathcon can be used to check the default security context associated with a file: it queries the system policy to find the correct context for the file and reports it.

$ matchpathcon -V /home/linux/Music/file2

  /home/linux/Music/file2 has context unconfined_u:object_r:admin_home_t:s0, should be unconfined_u:object_r:audio_home_t:s0

By now you must be wondering where these context rules are defined. In RHEL7, /etc/selinux/targeted/contexts/files is where the default contexts of files are defined.

As we have seen, changes made via chcon are not persistent. Non-default changes made via chcon can be reverted using the restorecon utility. The semanage fcontext command is used to persistently set the context of files. Similar to the example above, let’s try to change the context of a file, but this time let’s make a persistent change.

# Let’s create a folder called config under /root

$ mkdir config

# By default policy labels the directory as admin_home_t

$ ls -Zd config/

  drwxr-xr-x. root root unconfined_u:object_r:admin_home_t:s0 config/

 

# Lets relabel to config_home_t

$ chcon -t config_home_t config/

# As expected, matchpathcon complains about the wrong context, because the default policy for a config directory under /root is admin_home_t

$ matchpathcon -V config/

  config has context unconfined_u:object_r:config_home_t:s0, should be system_u:object_r:admin_home_t:s0

 

# Using restorecon the context reverts back to the default target label as defined by the policies

$ restorecon -v config/

  restorecon reset /root/config context unconfined_u:object_r:config_home_t:s0->unconfined_u:object_r:admin_home_t:s0

# Now let’s write a new security context rule that will relabel "config" folder under root to config_home_t

$ ls -Zd config/

  drwxr-xr-x. root root unconfined_u:object_r:admin_home_t:s0 config/

$ semanage fcontext -a -t config_home_t  "/root/config"

$ restorecon -R -v .

  restorecon reset /root/config context unconfined_u:object_r:admin_home_t:s0->unconfined_u:object_r:config_home_t:s0

$ ls -Zd config/

  drwxr-xr-x. root root unconfined_u:object_r:config_home_t:s0 config/

# Let’s create files under the /root/config/ folder and check their contexts

$ touch config/foo1

$ touch config/foo2.config

# Here foo1 and foo2.config seem to inherit the same context as their parent directory

$ ls -Z config/*

  -rw-r--r--. root root unconfined_u:object_r:config_home_t:s0 config/foo1

  -rw-r--r--. root root unconfined_u:object_r:config_home_t:s0 config/foo2.config

# Let’s say I want only files inside /root/config with the .config extension to have the security context config_home_t

$ semanage fcontext -a -t config_home_t  "/root/config(/.*\.config)"

$ restorecon -R -v .

  restorecon reset /root/config/foo1 context unconfined_u:object_r:config_home_t:s0->unconfined_u:object_r:admin_home_t:s0

 

$ ls -Z config/

  -rw-r--r--. root root unconfined_u:object_r:admin_home_t:s0 foo1

  -rw-r--r--. root root unconfined_u:object_r:config_home_t:s0 foo2.config

You should be able to find all these new rules added under "file_contexts.local":

$ cat /etc/selinux/targeted/contexts/files/file_contexts.local

 # This file is auto-generated by libsemanage

 # Do not edit directly.

 /root/config    system_u:object_r:config_home_t:s0

 /root/config(/.*\.config)    system_u:object_r:config_home_t:s0

Now that we have managed to modify the security context of files, let’s move on to another very useful technique for solving SELinux denials.

Booleans

SELinux provides a set of Booleans that help change some of the SELinux policies.  semanage boolean -l lists the SELinux policy Booleans that can be changed at run time. Booleans are pretty useful since changes can be made easily without writing policies.

getsebool -a is used to list the current status of these Booleans, and setsebool can be used to change the status of the available Booleans:

 

$ getsebool puppetmaster_use_db

  puppetmaster_use_db --> off

$ setsebool -P puppetmaster_use_db on

$ getsebool puppetmaster_use_db

  puppetmaster_use_db --> on

  The -P flag makes these settings persistent across reboots.

If you are interested in knowing all the existing SELinux rules on your system, there is a utility called sesearch which helps you find them (use --all to list all the rules). sesearch is part of the setools-console package, which contains other useful utilities such as findcon, seinfo, etc.

Access Vector Cache (AVC)

SELinux policies are always evolving: new rules might get added while old ones may get refined or removed. This can mean SELinux denying access to some operations of your application. These denials are usually referred to as AVC denials.

SELinux uses a cache called the Access Vector Cache (AVC) that caches both successful and failed access decisions.  AVC denials are usually logged in /var/log/audit/audit.log; they can also be logged in the system logs, depending on which daemons are running (auditd, rsyslogd, setroubleshootd). On an X Window System, if you have setroubleshootd and auditd running, a warning message is displayed on the console. If you do not have a GUI console, you can either use ausearch, which is part of the audit package, or inspect audit.log or the system log using good old “grep”.

SELinux provides an easy way to fix denials using audit2allow. This is a pretty helpful utility that generates SELinux policy from the audit logs to get rid of the denials. However, it is something you want to avoid if you are not sure about the policies being added.  Most denials can be dealt with by either changing the context of the files in conflict or enabling the available Booleans. If these two changes don’t take care of your denials, then care must be taken before you start adding new policies.
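For example, a typical (and deliberately cautious) workflow is to feed recent AVC denials into audit2allow, review the rules it generates, and only then load the resulting module. This is just a sketch; the module name mypol is arbitrary:

$ ausearch -m avc -ts recent | audit2allow -M mypol
$ cat mypol.te        # review the generated allow rules before loading them
$ semodule -i mypol.pp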

RHEL7 Update

Some RHEL7 features, such as systemd, will trigger changes in the security context of your applications, so care must be taken by application owners when developing or porting applications to RHEL7. It is important to understand the context transitions of processes and files; the “sepolicy transition” command can be used to generate a process transition report. RHEL7 has also addressed some of SELinux’s file name transition issues. If you are looking for more detailed information on SELinux in RHEL7, refer to the SELinux_Users_and_Administrators_Guide, which also provides an overview of new SELinux features in RHEL7.

Announcing: Foglight for OpenStack and KVM monitoring


The excitement around OpenStack is palpable. The conferences dedicated to the software are easily oversubscribed. Analysts and bloggers alike are writing about the software, its adoption and its pros (and cons).  Most importantly, many IT visionaries have started deploying OpenStack in their data centers.

Foglight for Virtualization is re-asserting its leadership by being the first to offer a completely new solution to monitor OpenStack and KVM hypervisor environments.

One of the main uses for OpenStack right now is establishing a private cloud in the data center. The workloads in this case are not very different from what usually runs on VMware. But there are definitely DevOps and big data use cases that are popping up more and more with OpenStack.

Using OpenStack to establish a private cloud on the KVM hypervisor lets customers stand up departmental clouds and dramatically increase automation (improving IT productivity), while at the same time mitigating vendor lock-in to reduce costs.

But use of OpenStack has its own challenges. Using OpenStack without a proven monitoring solution creates many problems for the administrators. The top concerns voiced by the administrators and architects that we talked to were:

  • Dynamic performance and availability information
  • Real-time information on workload placement and movement.
  • Historical data analysis to understand usage trends

Apart from this, the other asks were capacity planning, migration planning and chargeback.

Foglight for OpenStack extends Foglight’s industry-leading performance monitoring and analysis capabilities to OpenStack. The solution connects to OpenStack services to collect configuration and performance information. Foglight for OpenStack monitors the services (one or more instances that combine to form an instance of an OpenStack cloud) that provide Compute, Networking and Storage functionality, as well as the Keystone service that handles authentication. Together, the availability and performance of these services determine the actual perceived performance of the OpenStack infrastructure.

Administrators are able to answer questions such as service uptime status, the resources available to service incoming requests, and version information.

Administrators also get information about hypervisors, templates and networks defined. This information (in dashboard form in a NOC environment or sent as a report) is crucial to ensure smooth functioning of the cloud.

The main dashboard with drill-down capabilities shows the actual cloud constructs and hypervisor, storage and workload performance.  The information shown here comes from the underlying hypervisors and instances (VMs). The information is aggregated for cloud constructs, such as the Availability Zones defined in OpenStack, and presented accordingly. Because the OpenStack configuration is dynamic and can change at any time, Foglight keeps track of this information and updates the dashboards. Without this dynamic perspective, keeping track of VMs and hypervisors can be difficult.

The drill-down capabilities in Foglight allow administrators to start looking at performance problems at whatever level is appropriate and then follow the trail to reach the root cause of the problem.

Helpfully, the dashboard also shows top performance hotspots that can be good starting points.

Foglight for OpenStack offers another very important feature over any other tool available (free or otherwise). Foglight has a built-in relationship engine that can create topology maps of any “related” objects.

In a dynamic and potentially large environment like OpenStack, topology information is crucial. Knowing the infrastructure that a VM is running on at all times, along with all the other VMs sharing that same infrastructure, can go a long way toward understanding resource use and pain points.

This unique topology view dashboard also shows the alert status of each of the objects. That way, an administrator can know at a glance where focus is needed in real time. This not only makes the administrator proactive and saves a lot of resources; it also leads to increased operational efficiency.

Foglight for OpenStack is among the first few products to support OpenStack monitoring. It gives OpenStack administrators easy-to-use monitoring tools to keep track of workloads running in their private clouds, and it provides near real-time and historical performance analysis to help keep IT operations efficient.

This is really just the beginning. We have already started to work on the next additions to the solution. Optimization – where Foglight will suggest right-sizes for the VMs and reclaim wasted resources – is in the works, as are capacity management and chargeback additions to make sense of usage trends and growth characteristics of the cloud.

Using Windows PowerShell to update the BIOS on Dell PowerEdge servers


Dell iDRAC along with LC, provide a great set of server management and monitoring abilities. In the this whitepaper CIM Cmdlets with Dell iDRAC for remote management along with several previous blog posts located here, provide a holistic way to script the management of Dell PowerEdge servers. The new Dell PowerEdge 13G servers with iDRAC 8 enable increased levels of operational efficiency and flexibility with industry-leading systems management capabilities.

In this blog, you can learn how to update the BIOS on your server using PowerShell scripting. The script is easily extendable to run firmware updates, NIC updates and so on, and is very flexible with your existing environment. It works on 12th generation servers with iDRAC7 as well as the new 13th generation servers with iDRAC8. The first step is to set up a CIM session with the iDRAC using its IP address, username and password. Using the session variable, you can then run the remaining commands against the iDRAC: set up a job that points to a share (HTTP, CIFS, NFS, TFTP or FTP) and a reference target to install the Dell Update Package (DUP), and then reboot the system to execute the DUP and proceed with the BIOS update.
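The whitepaper walks through the exact cmdlets; the following is only a minimal sketch of that flow using the standard CIM cmdlets. The DCIM class and method names shown (DCIM_SoftwareIdentity, DCIM_SoftwareInstallationService.InstallFromURI, CreateRebootJob, DCIM_JobService.SetupJobQueue), the argument names, and the sample IP address and share path are assumptions to verify against the whitepaper before running anything in your environment.

```powershell
# Minimal sketch of the BIOS update flow described above. DCIM class/method names,
# argument names and returned property paths are assumptions; verify against the whitepaper.

$idracIP = '192.168.0.120'                                     # hypothetical iDRAC address
$dupUri  = 'cifs://user:password@192.168.0.10/share/BIOS.EXE'  # hypothetical DUP location on a share
$cred    = Get-Credential                                      # iDRAC credentials

# Step 1: set up a CIM session to the iDRAC over WS-Man (HTTPS, basic authentication).
$opt   = New-CimSessionOption -UseSsl -SkipCACheck -SkipCNCheck -SkipRevocationCheck -Encoding Utf8
$idrac = New-CimSession -ComputerName $idracIP -Port 443 -Authentication Basic `
                        -Credential $cred -SessionOption $opt -OperationTimeoutSec 3600

# Step 2: locate the installed BIOS software identity to use as the update target.
$bios = Get-CimInstance -CimSession $idrac -Namespace root/dcim -ClassName DCIM_SoftwareIdentity |
        Where-Object { $_.InstanceID -match 'INSTALLED' -and $_.ElementName -match 'BIOS' }

# Step 3: create an update job that points at the DUP on the share (HTTP, CIFS, NFS, TFTP or FTP).
$instSvc = Get-CimInstance -CimSession $idrac -Namespace root/dcim -ClassName DCIM_SoftwareInstallationService
$update  = Invoke-CimMethod -CimSession $idrac -InputObject $instSvc -MethodName InstallFromURI `
                            -Arguments @{ URI = $dupUri; Target = [ref]$bios }

# Step 4: queue a reboot so the staged DUP executes and the BIOS update is applied.
$reboot  = Invoke-CimMethod -CimSession $idrac -InputObject $instSvc -MethodName CreateRebootJob `
                            -Arguments @{ RebootJobType = [uint16]2 }    # assumed: 2 = graceful reboot

# The job IDs (JID_xxx values) are then scheduled together; the exact property paths on the
# returned objects may differ, so inspect them before use.
$jobIDs = @($update.Job.InstanceID, $reboot.RebootJobID.InstanceID)

$jobSvc = Get-CimInstance -CimSession $idrac -Namespace root/dcim -ClassName DCIM_JobService
Invoke-CimMethod -CimSession $idrac -InputObject $jobSvc -MethodName SetupJobQueue `
                 -Arguments @{ JobArray = $jobIDs; StartTimeInterval = 'TIME_NOW' }

Remove-CimSession $idrac
```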

Additional Resources

Managing Dell PowerEdge VRTX using Windows PowerShell

Dell PowerEdge R730xd – Easily scalable, high-density building block for Microsoft Exchange 2013 deployments


Generational improvements in Microsoft Exchange Server 2013 and Dell PowerEdge servers provide more ways for IT organizations to save money and time while meeting growing email capacity and performance demands. For example, IOPS requirements have been significantly reduced in Microsoft Exchange Server 2013, enabling you to use higher-capacity, lower-speed disk drives for storing multiple copies of Exchange databases.

Deploying Microsoft Exchange 2013 on servers with dense internal storage, such as the Dell PowerEdge R730xd, reduces the hardware footprint and associated administration and energy costs while providing easy scalability. Compared to its predecessor, the PowerEdge R720xd, the PowerEdge R730xd supports 33% more storage capacity when using large form factor (LFF) NearLine-SAS (NL-SAS) drives. This is made possible by an innovative internal drive tray design that supports up to four LFF disk drives. Using the internal drive tray, you can now deploy up to sixteen 4 TB[1] LFF NL-SAS drives in the PowerEdge R730xd for up to 64 TB of raw disk capacity.

This additional capacity and disk IOPS translate directly into more and bigger Exchange mailboxes. The following examples show how the PowerEdge R730xd can significantly increase mailbox density, the size of each mailbox, or the messages per day compared to the PowerEdge R720xd.

Note: The following comparisons are independent of each other and represent the benefits gained using a specific hardware configuration. Each example lists the hardware specifications for the servers used in the verification. The PowerEdge R730xd 3.5-inch chassis is available in different configurations; please check with your sales representative for details.

50% increase in user mailbox size

The illustration below shows the increase in mailbox size between similar CPU and memory configurations of the PowerEdge R720xd and PowerEdge R730xd with 16 internal drives. On both configurations, the number of mailboxes and the messages-per-day profile were kept constant.

33% increase in number of mailboxes

The graphic below shows the increase in the number of mailboxes between similar CPU and memory configurations of the PowerEdge R720xd and PowerEdge R730xd with 16 internal drives. On both configurations, the size of the mailboxes and the messages-per-day profile were kept constant.

33% increase in messages per day

The last comparison illustrates the increase in messages per day between similar CPU and memory configurations of PowerEdge R720xd and PowerEdge R730xd with 16 internal drives. On both these configurations, the number of mailboxes and the mailbox size were kept constant.

PowerEdge R730xd as the new Exchange building block

These improvements demonstrate why the increase in internal storage capacity, combined with the reduced IOPS requirements of Exchange 2013, makes the PowerEdge R730xd with 16 LFF NL-SAS drives an ideal server platform for organizations facing increased email capacity demands. To leverage these technology advances, the Global Solutions Engineering team at Dell developed a building block approach for Exchange deployments. This approach uses a Pod as a standardized configuration of server and storage resources sized to the solution requirements (for example, the number and size of mailboxes, the number of emails per day and the expected IOPS). The Pod shown below consists of a 3-copy DAG with two servers at the local site and one at the remote site hosting a passive copy of the Exchange databases.

Figure 1 PowerEdge R730xd at the heart of Pod Design

The Pod design includes the minimum server and storage resources required to meet the Exchange solution requirements, including the mailbox profile. The configuration within each Pod in a deployed solution remains the same and must not change.

Scaling out Exchange Deployment

To cater to more mailbox requirements, you just need to increase the number of Pods.

Figure 2 Scaling out a Pod to support more mailboxes

With the PowerEdge R730xd at the heart of the Pod design (shown in Figure 1), the Exchange deployment shown in Figure 2 can support up to 16,000 Exchange mailboxes of 3 GB each, with 4,000 mailboxes per Pod.

Scaling up Exchange Deployment

If your Exchange deployment requires larger mailbox capacity (up to 5 GB per mailbox), you can extend the above Pod design by attaching a JBOD, such as the PowerVault MD1400, populated with drives of the same size as those in the PowerEdge R730xd. In this updated Pod design, each Pod is capable of supporting 4,000 mailboxes of 5 GB each, so the complete solution infrastructure with 4 Pods can support up to 16,000 mailboxes.

Figure 3 Attaching PowerVault MD1400 to increase Pod's storage capacity
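As a quick sanity check of the sizing quoted above, the short sketch below recomputes the raw capacity of each R730xd building block and the total mailbox count for a four-Pod deployment. The figures come straight from this post; the variable names are purely illustrative.

```powershell
# Back-of-envelope check of the Pod sizing figures quoted above (illustrative only).

$drivesPerServer = 16      # LFF NL-SAS drives in an R730xd
$driveCapacityTB = 4       # highest-capacity LFF NL-SAS drive at the time of writing
$mailboxesPerPod = 4000
$podCount        = 4

# Raw capacity of one R730xd building block.
$rawTBPerServer = $drivesPerServer * $driveCapacityTB          # 64 TB

# Total mailboxes supported by the scaled-out deployment.
$totalMailboxes = $podCount * $mailboxesPerPod                 # 16,000

"{0} TB raw per server; {1} mailboxes across {2} Pods" -f $rawTBPerServer, $totalMailboxes, $podCount
```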

Reference Implementations of the building block approach

While the Pod design provides a standardized configuration for server and storage, it is important to architect all the other components in the Exchange deployment as well, adhering to the best practices and recommendations for each component at every tier of the solution infrastructure.

To show how these designs are put together, we released a few reference implementations that illustrate applying the best practices at each tier. These reference implementations not only describe the best practices to follow for optimal application performance and complete solution high availability, but also illustrate how the solution infrastructure can be designed to adhere to these recommendations.

8,000 Mailbox Solution Deployment

By implementing two Pods, that is four PowerEdge R730xd servers at the local site and two at the remote site, you can support up to 8,000 mailboxes in the deployment.

This reference implementation is based on the 8,000 mailbox site resiliency solution.

Figure 4 External Data Center Architecture for 8,000 mailboxes

For more information on this reference implementation, read Microsoft Exchange 2013 on Dell PowerEdge R730xd 3.5’’ Chassis.

12,500 Mailbox Solution Deployment

By implementing three Pods, that is six PowerEdge R730xd servers with PowerVault MD1400 at the local site and three at the remote site, you can support up to 12,500 mailboxes in the deployment.

This reference implementation is based on the 12,500 mailbox site resiliency solution.


Figure 5 External Data Center Architecture for 12,500 mailboxes

For more information on this reference implementation, read Microsoft Exchange 2013 on Dell PowerEdge R730xd 3.5’’ Chassis and PowerVault MD1400.



[1] At time of writing, 4 TB is the highest capacity available in large form factor NL-SAS drives.

Solving Hard Drive Pain Points with Leading-Edge Solid State Drives


Hard Drive Pains

As you already know, computers require more performance than ever. There are three things that boost computer performance: the processor(s), the memory, and the storage. Processors and memory, being solid-state, have been speeding up every couple of years for decades.  Hard disk drives…not so much.  Their throughput is limited by the nature of their mechanical design – so hard drives are becoming a performance bottleneck. To make matters worse, hard drives also use a lot of power and generate a lot of heat, increasing both power consumption and cooling costs.

Finally, HDDs are more prone to data loss and downtime, mostly because they have moving parts, making them susceptible to shocks from being knocked or dropped.

Organizations spend large amounts of money on client systems, and these problems result in higher capital and operating expenses, as well as user dissatisfaction and lost productivity. They need:

a. Drive technology that increases performance by accessing data very quickly (low latency) and moving large amounts of data rapidly (high throughput).
b. Drive technology that doesn’t consume much power, doesn’t produce much heat and doesn’t cause cooling problems, thereby reducing the load on laptop batteries.
c. Drives that are highly reliable and not easy to break.

Solving the Problem

There’s a simple solution to all these pains that’s field-tested, proven in millions of deployments, and widely available. It’s the SSD.

SSD is an acronym for “Solid State Drive.” Unlike hard drives, which spin platters of magnetic media, SSDs have no moving parts. SSDs are typically made up of NAND flash, which, unlike RAM, stores data on chips for long periods of time. Originally, SSDs were mostly deployed in very expensive PCs but, as costs have come down and both capacity and reliability have significantly improved, they’re now seen as an essential component of many desktops, laptops and mobile workstations.

SSDs are the most cost-effective way to boost performance. Dell tests have shown that one Samsung SSD can equal the performance of 900 hard drives. SSDs also fix power problems: a typical SSD consumes half the power of a typical hard drive. SSDs generate almost no heat as well, which reduces cooling costs still further. Finally, SSDs help resolve maintenance issues because they’re more durable.

Organizations benefit from SSDs in many ways. Thanks to SSD performance, imaging new clients takes minutes instead of hours. That translates into faster setup and less admin time babysitting installation. SSDs don’t need additional hardware or management tools.  And SSDs, since they don’t have moving parts, are less likely to be damaged by impact, shock, power issues, etc. That means they last longer and are rated for the serviceable life of the notebook – and beyond.

Now, it’s true that on a per-gigabyte basis, SSDs are more expensive than HDDs, though costs are decreasing. Because SSDs excel at IOPS (Input/Output Operations Per Second), they should be deployed where very rapid access to data is the key metric. Typically, rapid data access matters during boot and during heavy disk access for large files. Though most buyers choose SSDs to reduce boot times and maximize performance, some buyers also see value in using SSDs as a tool to future-proof their deployments.

Use Cases

But since SSDs increase system cost, what are the best use cases?

a. For light use – basic web apps and cloud-based applications – SSDs won’t offer much benefit beyond reducing boot time.
b. For general business productivity, SSDs offer noticeable advantages.  Many business users will value faster boot times as well.
c. For computer aided design and scientific computing, as well as graphic design and video editing, SSDs are absolutely essential.
d. But it’s also worth remembering…if any customer wants performance or durability, adding an SSD is the most cost effective upgrade.

Want to learn more?

In conclusion, organizations that deploy PCs ought to consider widespread SSD adoption. Samsung is the world’s leading supplier of SSDs, as well as the NAND memory used in these storage devices. With cutting-edge technology, the company is driving SSD price points and capacities that enable broad market adoption. Popular drive capacities start at 128GB, but 256GB and 512GB are becoming increasingly popular.

For more information about Samsung SSDs, please visit: http://www.samsung.com/global/business/semiconductor/product/flash-ssd/overview

For more information about Dell notebooks with SSDs, please visit: http://www.dell.com/learn/us/en/04/sb360/samsung

The Evolution of Solid State Drives for the Enterprise


Now that you have made the decision to adopt Solid State Drives (SSDs) in your data center, choices abound. As with any evolving data center technology, there are options to suit your unique compute environment -- whether your primary concern is cost or performance.

You are certainly familiar with the standard storage interfaces: SATA, SAS and PCIe. However, you may not be familiar with the performance improvements that have occurred over the past few years, and how these recent technological improvements may play a critical role as you determine which SSDs to choose for your environment.

Interface Choices

SATA remains the most prevalent interface choice in the data center, and has undergone a series of revisions that have increased its capabilities. The latest revision (SATA 3.x) supports throughput of 6Gbps, a 2x improvement over earlier versions (which are still prevalent in older systems).  Because current SATA drives are backwards-compatible, they will still function in older systems but will not achieve the performance gains in newer systems supporting SATA 3.x.

Common in many external storage enclosures, Serial Attached SCSI (SAS) drives have also increased in throughput over time, currently supporting 12Gbps. SAS is popular in enterprise applications because it is dual-ported, enabling redundancy. The SCSI command set used in SAS also provides additional commands suited to enterprise environments. The performance improvements of SAS (along with continued reductions in price) mean that many organizations are now deploying entry-level SSD storage arrays.

The most significant development with SSD interfaces is the introduction of PCIe. Until recently, the use of PCIe for SSDs was a niche application, requiring proprietary drivers.  With the standardization of NVMe (Non-Volatile Memory express), the incredible speeds possible with a PCIe SSD interface have been brought to the mainstream market. The latest version of PCIe enables 8Gbps throughput per lane. An NVMe PCIe SSD is typically equipped to use four lanes, enabling an interface that supports up to 32Gbps. The NVMe protocol also provides more efficient access from the host to the drive, significantly reducing latency compared to SAS or SATA interfaces.

In addition to increased throughput, SSDs support greater capacity than ever (Samsung has a 3.8TB SSD at the time of this writing). However, there may be limitations to the size of drive you can use if your workload requires a minimum IOPS/GB. Denser SSD capacities may result in the IOPS/GB minimum not being met – unless the faster PCIe interface is used.
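To see why IOPS/GB can become the limiting factor, the rough calculation below divides the approximate 4K random-read figures from the comparison table in the next section by a 3.8 TB (about 3,800 GB) capacity. The numbers are the approximations quoted in this post, not device specifications.

```powershell
# Rough IOPS-per-GB comparison using the approximate 4K random-read figures from
# the table below and a 3.8 TB (~3,800 GB) drive; illustrative only.

$capacityGB = 3800

$randomReadIOPS = @{
    'SATA SSD'      = 75000
    'SAS SSD'       = 130000
    'NVMe PCIe SSD' = 750000
}

foreach ($drive in $randomReadIOPS.Keys) {
    $iopsPerGB = [math]::Round($randomReadIOPS[$drive] / $capacityGB, 1)
    "{0}: ~{1} IOPS/GB at {2} GB" -f $drive, $iopsPerGB, $capacityGB
}
```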

Performance Comparisons

The table below offers a comparison of SATA, SAS and NVMe PCIe SSDs based on a set of commonly used criteria:

• Sequential reads: A measure of how quickly large files can be read from a device in blocks of data that are adjacent to one another in the drive;
• Sequential writes: A measure of how quickly large files can be written to a device in blocks of data, also adjacent to one another;
• 4K random reads (IOPS): A measure of how quickly multiple small files can be read from a device where the files are 4 kilobyte blocks of data found at random locations on the device;
• 4K random writes (IOPS): A measure of how quickly small files can be written to a device in 4 kilobyte blocks of data to random locations on the device. 


 

|                                | SATA SSD   | SAS SSD    | NVMe PCIe SSD |
|--------------------------------|------------|------------|---------------|
| Interface                      | 6Gbps SATA | 12Gbps SAS | 32Gbps PCIe   |
| Sequential reads (throughput)  | ~500MB/s   | ~1100MB/s  | 3000MB/s      |
| Sequential writes (throughput) | ~365MB/s   | 765MB/s    | 1350MB/s      |
| 4K random reads (IOPS)         | ~75,000    | ~130,000   | 750,000       |
| 4K random writes (IOPS)        | ~36,000    | ~100,000   | 110,000       |

As noted, the PCIe interface supports the fastest throughput and, therefore, the highest performance numbers.

Samsung was the first to offer a PCIe SSD using the NVMe protocol, enabling customers to achieve historic performance levels. The Dell PowerEdge R920, the first to deploy the Samsung NVMe PCIe SSD (which can write random data in 25 microseconds), delivers markedly increased performance for big data tasks. This design is significantly faster than HDD-based systems in high performance computing applications. In fact, a single Samsung NVMe PCIe SSD can provide the random read performance of over 1,800 of the fastest hard drives available in the market today, resulting in power, cost and space savings in your data center.

You're now ready to make informed storage decisions, and you want to make them count. As current trends in the SSD arena suggest, you can clearly step on the accelerator by investing in an NVMe PCIe SSD.

For more information about Samsung SSDs, please visit http://www.samsung.com/global/business/semiconductor/product/flash-ssd/overview

For more information about the Dell PowerEdge R920 featuring the Samsung NVMe SSD, please visit: http://www.dell.com/us/business/p/poweredge-r920/pd


Dell PowerEdge Bests HP in SAP HANA Performance using the SAP BW Enhanced Mixed Load (EML) Benchmark


With the publication of two additional world record results, Dell's PowerEdge R730 and R920 servers are once again showing their industry leading performance when running SAP benchmarks. This time, the benchmark is SAP's Business Warehouse Enhanced Mixed Load, or BW-EML Benchmark, which stresses ad-hoc, near real-time querying and reporting on large data warehouses. The key performance indicator of the benchmark is the number of ad-hoc navigation steps per hour for a given number of initial records.

At the 2 billion initial records level, we see the R920 with 137,010 ad-hoc navigation steps per hour, edging out the next highest result of 126,980 ad-hoc steps achieved by the HP ProLiant DL580 Gen8 server. Both database servers have four processors from Intel's Xeon E7-4800 v2 family.

At the 1 billion initial records level, we see the R730 with 148,680 ad-hoc navigation steps per hour, edging out the next highest result of 129,930 ad-hoc steps, achieved by another HP server, the ProLiant DL580 G7. With just two processors from Intel's Xeon E5-2600 v3 family, the R730 is able to top the performance of the HP server with its four processors from Intel's Xeon E7-4800 family.
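For reference, here is a quick calculation of the margins implied by those published results; the figures are the ones quoted above.

```powershell
# Margin of the Dell results over the next-highest published results (figures from this post).

$results = @(
    @{ Level = '2 billion records'; Dell = 137010; NextBest = 126980 },   # R920 vs. DL580 Gen8
    @{ Level = '1 billion records'; Dell = 148680; NextBest = 129930 }    # R730 vs. DL580 G7
)

foreach ($r in $results) {
    $marginPct = [math]::Round(($r.Dell - $r.NextBest) / $r.NextBest * 100, 1)
    "{0}: {1} vs. {2} ad-hoc navigation steps/hour (+{3}%)" -f $r.Level, $r.Dell, $r.NextBest, $marginPct
}
```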

As mentioned before, the PowerEdge R730's industry leading performance running SAP benchmarks is nothing new. In September, Dell published a world record two socket SAP SD Two-Tier result of 16,500 benchmark users, surpassing the results published by HP, Cisco, and IBM.

Similarly, the PowerEdge R920 is also already a world record holder on SAP benchmarks, having the world record for four socket performance on the SAP SD Two-Tier benchmark with a result of 25,451 benchmark users.

The two SAP BW-EML results highlighted here, and Dell's world records on the SAP-SD benchmark, show Dell's commitment to world class performance in mission critical applications. More SAP benchmarks are underway, so stay tuned!

Details of these comparisons are summarized in the tables below.

For more information on the SAP SD and BW-EML benchmarks, visit http://global.sap.com/campaigns/benchmark/index.epx.

Windows Server 2008 Service Pack 2 performance on Dell PowerEdge R220


This blog post was originally written by Tilak Sidduram from the Dell Windows Engineering Team.

With less than a year left before Microsoft ends extended support for Windows Server 2003, including R2, most organizations still have some systems that need to be migrated to a newer version of the operating system (OS). The best candidate to replace Windows Server 2003 is Windows Server 2012 R2, but some organizations might have to opt for Windows Server 2008 due to application or hardware support limitations. If your organization will be moving to Windows Server 2008, you will want to read this blog; we talk about the performance of Windows Server 2008 SP2 (32-bit) on the PowerEdge R220 and the various devices that are supported on this server.

Mainstream support for Windows Server 2003 ended in 2010 and the OS has been in extended support since then; everyone is well aware of the impending end of all Microsoft support in July 2015. While Microsoft and Dell usually recommend migrating to the latest shipping server OS, some customers are cautious, with a few needing or wanting to stick to a supported 32-bit operating system. Windows Server 2008 SP2 was the last Microsoft server operating system to be offered in a 32-bit architecture.

A year after Windows Server 2008 was launched, Service Pack 2 was released, and this is the only supported version of the OS. The operating system is built from the same code base as Windows Vista and therefore shares much of the architecture and functionality of that client OS. It's important to note that Microsoft mainstream support for this OS ends in January 2015, so any migration to this OS should be considered temporary at best.

Windows Server 2008 is available in different editions, but for the purposes of our bench testing with the PowerEdge R220, we have only considered Windows Server 2008 SP2 32-bit Standard Edition.  Dell does not officially support any version of 2008 SP2 on the PowerEdge R220 platform and does not recommend this OS/server combination for production use.

In this blog, we outline the performance of Windows Server 2008 SP2 32-bit on the PowerEdge R220 and the various devices that work with this server/OS combination. We also describe the different deployment methods that can be used to deploy Windows Server 2008 SP2 on this server and the scope of testing that was performed.

Due to driver and OS compatibility issues with the Intel chipset on this server, the 64-bit variant of this OS is not viable; you should only attempt to deploy the 32-bit version.

Listed below are the hardware peripherals supported on PowerEdge R220 that have 32-bit drivers. All of these drivers can be downloaded from the Dell Support website and then installed after the OS is deployed.

  • Intel chipset
  • Dell PERC S110 storage controller
  • Dell PERC H310 & H810 storage controller
  • On-Board SATA controller either in ATA or AHCI mode
  • Broadcom network controllers
  • Intel network controllers
  • Qlogic network controllers
  • Matrox video controller.

Windows Server 2008 SP2 does not contain inbox drivers for any of the devices listed above. In particular, the Dell PERC S110 and H310 storage controllers supported on the R220 have no inbox drivers in Windows Server 2008 SP2, so it's necessary to download the required driver from the Dell Support website and provide it at the time of OS installation in order to install the OS.

Dell supports different methods of deploying a Windows Server OS. Below is the list of Dell supported deployment methods that can be used to deploy Server 2008 SP2 on the R220. For more details on these, refer to this Dell Knowledge Base article.

  • Deploying Windows Server 2008 SP2 using the operating system installation DVD.
  • Deploying Windows Server 2008 SP2 using Dell Lifecycle Controller
  • Deploying Windows Server 2008 SP2 using Dell Systems Build and Update Utility

Testing coverage and scope:

To ensure full coverage and testing of this OS, various scenarios were taken into consideration and a proper test plan was put in place. The OS testing mainly covered OS installation using the various supported methods and devices, such as optical drive, recovery media, USC, SBUU, PXE and iSCSI, with HDD/RAID combinations using both the S110 and H310 controllers.

System functionality testing was performed with both RAID-mode and non-RAID-mode configurations. All listed network controllers were used, and device-specific tests were performed on those controllers. Some of the features covered in this testing are Wake-on-LAN, iSCSI, offload and jumbo frame sizes. Firmware and driver update testing was performed to make sure that devices update successfully to the latest firmware and driver versions and continue to function after the update.

Windows has robust event logging and reporting features. Various tests were performed to verify the logging and reporting of normal and abnormal events when an event was triggered in the OS. To emulate real-world user scenarios, various workload-specific tests were executed on the server under various CPU conditions, and these stress tests were run for a few days.

Most of the listed components on this server were put under various stress workloads and were checked for errors, stability and system functionality. On a final note, we covered most scenarios to make sure that all listed devices were included in this testing; however, not all possible configurations were tested.

Dell 13th generation PowerEdge serves up both higher performance and lower power consumption


Dell's 13th generation PowerEdge R630 1U rack server, based on the Intel Xeon processor E5-2600 v3 product family, proves capable of producing 9.5% more work with 19% better overall energy efficiency than a like-configured example of its two-year-old predecessor, the PowerEdge R620. Even when idle, the R630 consumed 14% less power, saving a substantial 88 kWh of electricity per year.
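For context, the 88 kWh figure lines up with the 14% idle-power reduction if you assume the server idles around the clock. The back-of-envelope sketch below shows the idle wattage implied by those two numbers; the 24x7 assumption and the derived watt values are ours, not measured figures from the white paper.

```powershell
# Back-of-envelope: what idle power the 88 kWh/year savings implies,
# assuming the server idles 24x7 (8,760 hours/year). Illustrative only.

$annualSavingsKWh = 88
$hoursPerYear     = 8760
$idleReduction    = 0.14     # R630 idles at 14% less power than the R620

$wattsSaved = $annualSavingsKWh * 1000 / $hoursPerYear        # ~10 W
$r620IdleW  = $wattsSaved / $idleReduction                    # ~72 W implied R620 idle
$r630IdleW  = $r620IdleW - $wattsSaved                        # ~62 W implied R630 idle

"~{0:N0} W saved at idle; implied idle power: R620 ~{1:N0} W, R630 ~{2:N0} W" -f $wattsSaved, $r620IdleW, $r630IdleW
```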

The energy efficiency of PowerEdge servers, configured just as many enterprise data center customers would configure them, has improved 163x over the past ten years. Given IT customers’ demand for servers that can perform more work while reducing data center footprint, electricity use and TCO, Dell makes the engineering investment to provide exactly that.

 

The SPECpower_ssj2008 industry-standard benchmark for measuring compute server energy efficiency was used for this comparison. For more information on how SPECpower_ssj2008 works, see spec.org/power_ssj2008/

The full white paper can be found on TechCenter at http://en.community.dell.com/techcenter/extras/m/white_papers/20440812

 

 

 

Dispelling the myths: Uncovering the truth of SSDs


Thanks to incredible performance, reliability, and decreasing price points, solid state drives (SSDs) are becoming increasingly common in today’s desktops and notebooks. But since SSDs aren’t the most common data storage technology, some myths persist that can make some buyers wary. Let’s explore the truth of SSDs to see when and where they’re a good choice for desktops and notebooks.

Myth #1. “SSDs aren’t as fast as HDDs in some circumstances”

SSDs are really fast. A typical 7200RPM laptop hard disk provides about 200 I/O operations per second (IOPS). A typical consumer SSD, like the Samsung 850 PRO, provides thousands of IOPS along with a massive reduction in latency of 10 times or more. In other words, an SSD not only sends more data at a time, it responds to requests much more quickly.

It’s true that HDDs perform well in sequential data access, which refers to accessing large contiguous blocks of data located on adjacent locations of a hard disk platter. But HDD performance on sequential disk access doesn’t outstrip SSD performance – SSDs are faster than HDDs at any task.

What does that mean for a user? On essentially every client application and task, an SSD will outperform a hard drive. Boot times are faster, application launches are faster, I/O intensive tasks are faster, and shutdown is faster. The quickest way to boost client performance is by adding an SSD.

Myth #2. “SSDs can’t be wiped securely”

Data security worries many users, especially businesses and governments, because misplaced data poses a real risk. During an average notebook’s lifecycle, the system might move from one user to another, be deployed in another country, and finally be decommissioned when obsolete. In any of these cases, it’s important to securely wipe the drive to avoid data security breaches.

It’s true that HDD wipe utilities can’t be used with SSDs, due to the technology differences. But it’s also true that most vendors supply software to facilitate secure wiping, like Samsung’s Magician. When an ATA Secure Erase (SE) command is issued, the SSD resets all of its storage cells by releasing stored electrons - thus restoring the SSD to its factory default condition. SE will process all storage regions, including the protected service regions of the media.

Myth #3. “SSDs are unreliable”

This is a common misunderstanding. HDDs are a tried and tested technology, while SSDs are newer, so many buyers wonder: are SSDs as dependable as hard drives?

“SSDs wear out.” It’s true that the memory cells used in SSDs have a limited number of write cycles before they burn out. However, consumer SSDs are engineered to account for this issue: wear-leveling technology ensures that the drive wears evenly, and manufacturers also put “spare” memory on the drive (just like having spare tracks on a spinning HDD) that can replace dying or dead cells on a device.

This wear-leveling directs each write to a different cell rather than writing to the same cells again and again, which evens out wear and extends drive lifespan. Typical workloads are also read-intensive (usually two to three reads per write), and reads don’t wear the cells on an SSD, so most application activity has no impact on the SSD’s operational life.

What does that mean in practice? A 256 GB SSD used in a corporate client environment that writes 40GB per day has an expected lifespan of over 16 years – your SSD will outlive the other components of your system.
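That 16-year figure is simple arithmetic on the stated write rate. The sketch below shows the total lifetime writes it implies; it is a calculation on the numbers quoted above, not a drive endurance specification.

```powershell
# What "40 GB/day for over 16 years" implies in total lifetime writes.
# Purely arithmetic on the figures quoted above; not a drive specification.

$dailyWritesGB = 40
$lifespanYears = 16

$totalWritesTB = $dailyWritesGB * 365 * $lifespanYears / 1000     # ~234 TB written
"Writing {0} GB/day for {1} years totals roughly {2:N0} TB" -f $dailyWritesGB, $lifespanYears, $totalWritesTB

# Conversely, for a drive with a known endurance rating (TBW):
#   years = TBW * 1000 / dailyWritesGB / 365
```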

Going beyond lifespan, SSDs are also more durable and reliable than their HDD counterparts. Hard drives, full of moving parts, are susceptible to damage caused by loss of power or physical impact. Solid State Drives are more robust simply because they don’t have moving parts, and are rated to be four times more shock resistant than their HDD counterparts.

A study by Samsung, Google, and Carnegie Mellon also shows annualized return rates (ARR) for SSDs that are twelve times lower compared to HDDs.

Dispelling the Myths and Next Steps:

Buyers don’t have to be worried about SSDs – they’ve proven their worth in millions of client deployments. It’s fair to say that SSDs should be considered for most desktops and notebooks, especially those running performance-intensive tasks, those being moved from site to site, or those where data security is an important consideration.

Samsung, the world’s leading supplier of SSDs, offers a broad portfolio covering all of your capacity, form-factor, and interface needs.

For more information, please visit: http://www.samsung.com/global/business/semiconductor/product/flash-ssd/overview 

The SSD Advantage: Breaking through the Performance Bottleneck


As virtual environments become more widespread and applications require greater performance, you may find that traditional hard drives are becoming a bottleneck. Shifts toward private clouds, in-memory computing, and highly demanding transactional workloads require leading-edge server performance.

There are three components that can significantly boost server performance: the processor(s), the memory, and the storage. Processors and memory have been speeding up every couple of years for decades. Hard disk drives (HDDs)… not nearly as much.

To circumvent this problem, you could use dozens or even hundreds of HDDs, pooling the performance of many to meet requirements. Using many HDDs will boost performance, but also delivers headaches in terms of costs, maintenance, power consumption, cooling and space constraints.

Servers weren’t designed to hold hundreds of hard drives, so those faced with high-performance workloads and saddled with HDDs have been forced to buy external storage expansion, driving up costs. Hard drives also use a lot of power and generate considerable heat, increasing power consumption and cooling costs. Finally, adding dozens of hard drives decreases reliability and increases management complexity.

Rethinking the Status Quo

There is an easy fix for these problems. Clearly, you will benefit from storage technology that can access data almost immediately (improved latency) and move large amounts of data quickly (increased throughput). You need storage that won’t consume much power, doesn’t produce much heat, and doesn’t cause cooling problems. Furthermore, you will greatly benefit from storage that’s highly reliable and doesn’t require extra management.

Not surprisingly, many companies like yours are now deploying flash-based, solid state drives (SSDs) that provide significantly faster access to data, resulting in increased performance and lower latency.

Unlike hard drives, which spin platters of magnetic media, SSDs have no moving parts. SSDs are typically made with NAND Flash, which, unlike RAM, stores data on chips for long periods of time. They were designed around enterprise application I/O requirements and their primary attributes are performance and reliability.

An SSD is probably the most cost-effective way to boost server performance. SSDs also fix power problems. A Samsung SSD, for example, consumes half the power of a typical hard drive. Further, SSDs generate almost no heat, and since one SSD can provide the performance of many HDDs, SSDs also can help solve issues with electricity consumption and cooling in data centers.

Moreover, SSDs help to minimize maintenance issues as they’re considerably more reliable than hard drives. They go into servers and storage arrays without additional hardware or management tools. And by sharply reducing drive counts, SSDs reduce complexity.

Demonstrating a Performance Delta

To give you a sense of the performance opportunities provided by SSDs, consider a recent analysis completed by Principled Technologies. Two PowerEdge R920 servers running Oracle with an OLTP TPC-C-like workload were tested. The first was configured with standard SAS hard drives, the second with Samsung NVMe PCIe SSDs. The performance delta between the two was quite significant.

While the performance of the PowerEdge server with the HDD configuration was good, the upgraded configuration with PCIe SSDs delivered 14.9x the database performance of its peer (meaning that it could complete nearly 15 times as much “work” as the standard configuration). This was accomplished with only one-third the number of total drives (8 SSDs vs. 24 HDDs).
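Taken together with the drive counts, that result also implies a much larger amount of work per drive. The small calculation below uses only the figures quoted from the Principled Technologies comparison.

```powershell
# Per-drive comparison implied by the Principled Technologies figures quoted above.

$perfMultiple = 14.9     # SSD configuration delivered 14.9x the database performance
$ssdCount     = 8
$hddCount     = 24

$perDriveAdvantage = $perfMultiple * ($hddCount / $ssdCount)      # ~44.7x per drive
"Roughly {0:N1}x the work per drive for the SSD configuration" -f $perDriveAdvantage
```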

Unquestionably, SSDs deliver the best cost benefit for highly-intensive workloads, including transaction processing and data warehousing. In other cases, like virtual desktop infrastructure (VDI) or high-performance computing (HPC), SSDs are also ideal.

Typically, when a workload requires very high capacity, a large number of high-capacity HDDs may make more economic sense. However, even in these situations, many servers now include a few SSDs to maximize boot, swap file and random access performance, while using HDDs for capacity optimization. Dell servers support use of an SSD and a HDD in the same chassis, at the same time.

Clear-Cut Benefits Drive Deployment

Samsung SSDs on Dell PowerEdge servers clearly benefit data center administrators and end-users. Buyers benefit by driving down acquisition and operating costs, while gaining more from reduced complexity and increased reliability.

But the major benefit comes from performance. Administrators will see immediate boosts in performance by deploying SSDs. They’ll be able to embrace performance-intensive workloads that were impossible to run on hard drives. End-users will experience better performance and increased uptime for current as well as new IT services.

The deployment of SSDs in enterprise environments is rapidly accelerating because the benefits are so clear-cut. No other technology has a better potential to literally transform your server (and data center) experience.

For more information about Samsung SSDs, please visit: http://www.samsung.com/global/business/semiconductor/product/flash-ssd/overview 

For more information about the Dell PowerEdge R920 featuring the Samsung NVMe SSD, please visit: http://www.dell.com/us/business/p/poweredge-r920/pd 
