Channel: Dell TechCenter

Solid State Drives: An Objective Comparison to HDDs


Author: Tien Shiah, Samsung Product Marketing Manager – SSD

Tien brings more than 15 years of product marketing experience from the semiconductor and storage industries.  He has an MBA from McGill University and an Electrical Engineering degree from the University of British Columbia.

Solid State Drives (SSDs) and hard disk drives (HDDs) do the same basic job: each stores data. Both hold the operating system, applications, and personal files. But each uses fundamentally different technology to accomplish the same tasks.

What's the difference?

HDDs use rapidly rotating platters of magnetic media to store data. This technology dates back to 1956, and it has been refined over decades to become the default storage medium for desktops and laptops. SSDs, on the other hand, have no moving parts; they use integrated circuits as memory to store data. Though tested and proven for more than a decade, SSDs still aren’t as common.

Why would a user choose one over the other? Well, each technology has its advantages depending on the factors that are important to you.

Price: To put it bluntly, in terms of dollars per gigabyte (unit of storage), SSDs are more expensive. As of October 2014, a 500GB SSD retails for $269.00, while a 500GB hard drive retails for $64.99. That translates into $0.53/GB for the solid state drive, and $0.12/GB for the hard drive. Smaller SSDs are more affordable, but for the most part, a price gap remains between SSD and HDD, though SSD prices have declined significantly over the past few years.
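As a quick check of those figures, the price-per-gigabyte arithmetic can be sketched in a few lines (the prices are the October 2014 retail figures quoted above):

```python
import math

# Dollars per gigabyte, truncated to the cent as in the figures
# quoted above (October 2014 retail prices, 500GB drives).
def cost_per_gb(price_usd, capacity_gb):
    return math.floor(price_usd / capacity_gb * 100) / 100

print(cost_per_gb(269.00, 500))  # 0.53 for the 500GB SSD
print(cost_per_gb(64.99, 500))   # 0.12 for the 500GB HDD
```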

Capacity:  SSDs have another perceived challenge. Basically, the more storage capacity, the more data and applications (programs, photos, music, videos, etc.) a PC can hold. At the moment, Samsung offers a range of PC SSDs up to 1TB. But hard drives offer much larger capacities, up to 4TB. Realistically, however, for many business users this capacity discrepancy isn’t an issue because they would struggle to fill a smaller drive, much less a terabyte. It would only become an issue for anyone who handles large numbers of large files, such as heavy multimedia users or graphic designers.

So why choose SSDs?

Performance: Mainly, it comes down to performance. In one test conducted by an independent third party, a Samsung 840 EVO SSD completed a Windows boot cycle in 18 seconds, while the same laptop using a standard 7200RPM hard drive took over a minute (1:11) to perform the same task. Other tasks, including application launches and shutdowns, show a similar performance boost. Normal operation also becomes faster, so system responsiveness improves. Whatever the use, this extra speed may be the difference between finishing on time and failing to, and it certainly leads to better user satisfaction over time.

Durability: Solid State Drives offer a compelling durability advantage. With no moving parts, an SSD is more likely to preserve data in the event of impacts or physical damage. Most hard drives park their read/write heads when the system is off, but the heads fly over the drive platter at hundreds of miles an hour when the drive is in operation. That durability makes SSDs especially valuable for anyone who travels with a laptop or works in an environment with vibration or physical shocks, like a factory floor.

Fragmentation: Because of their rotary recording surfaces, HDDs work best with larger files that are laid down in contiguous blocks. That way, the drive head can start and end its read in one continuous motion. When hard drives start to fill up, large files can become scattered around the disk platter, a condition known as fragmentation. HDD fragmentation causes system performance to degrade over time, while SSDs, using solid state memory, don’t have this limitation. Since there’s no physical read head, SSD performance doesn’t decay due to fragmentation.

Heat and Power: Because HDDs rely on spinning platters, they consume more power than SSDs, which causes them to emit more heat, resulting in additional cooling needs. For desktops, few people care, but for laptops, which run on batteries, the power consumption difference matters. Some of the most power-efficient SSDs are made by Samsung, consuming as little as 0.045W at idle. Also, since SSDs are so much faster, they finish tasks quickly, reducing the time spent at peak power consumption.

Noise: Even the quietest HDD makes noise. Noise comes from the drive spinning or the read arm moving back and forth. Faster hard drives make more noise than slower ones. SSDs make virtually no noise at all, since they have no moving parts.

Longevity: There’s a lot of discussion around longevity, but under average commercial workloads, a typical client SSD will last over 16 years. That’s because SSDs have adopted a range of maintenance technologies to ensure data integrity, such as bad block management, error-correcting code, and wear leveling. It’s possible that HDDs could have a longer lifespan, but for most users, the longevity difference is a moot point that won’t matter in real life, especially if impact resistance is taken into account.

What’s the conclusion?

Overall, HDDs currently hold an advantage on price and capacity. SSDs work best if speed, ruggedness, power/cooling, noise, or fragmentation (technically also part of speed) are important factors. If it weren't for the price difference, SSDs would be the winner hands down. Regardless, for many users, the benefits of SSDs outweigh the few advantages still held by HDDs.

Samsung, the world’s leading supplier of SSDs, offers a broad portfolio covering all of your capacity, form-factor, and interface needs.

For more information, please visit http://www.samsung.com/global/business/semiconductor/product/flash-ssd/overview


SSD in the Array? Today


Author: Tien Shiah, Samsung Product Marketing Manager – SSD


By now, you have recognized that the performance gap between the CPU and hard disk-based storage systems has been increasing significantly every year. Until now, you may have even relied on popular workarounds to reduce this gap (e.g. RAID schemes and/or expensive cache memory). As this technical bulletin points out, traditional HDD storage systems increasingly struggle to meet the I/O performance requirements of data-hungry applications.

At the same time, the cost of flash storage continues to decrease. Moreover, flash storage can now be deployed in much greater capacities than cache, giving you the opportunity to reduce latency and improve overall performance. But what about using flash in an external storage array? With auto-tiering and other advancements, large amounts of flash storage (in the form of solid state drives (SSDs)) are becoming commonplace in the enterprise.

The Increasing Power and Value of SSDs

SSDs with a SAS interface are designed for use within an external storage array. They are primarily designed to handle high volume, write-intensive applications such as online transaction processing (OLTP), mail servers, and other common enterprise applications. Engineered to maximize write endurance, these drives provide long-lasting sustained performance by relying on advanced intelligence that promotes “wear-leveling” and other features required in the data center.

As an example, using industry-leading Samsung 12Gbps SAS SSDs as a second layer of cache in a storage array will help accelerate the transfer of data between a system’s DRAM-based cache and HDDs. This has the effect of reducing latency, as hot data is stored in the higher-performing flash storage cache, thereby reducing the need to read data from slower-moving HDDs. It’s ideal for systems that repeatedly access the same blocks of storage and then quickly change to repeatedly access a different set of blocks, an occurrence we often see with databases that handle online transactions.

Additionally, as the $/GB price of SSDs continues to fall, there are increased opportunities to make use of different drive types within an array. Many storage arrays include auto-tiering, a technology that stores more frequently accessed data on the fastest storage available and less frequently accessed data on slower (but typically higher capacity) storage options.

Maximizing Your Auto-Tiered Configuration

So what is key to maximizing an auto-tiered configuration? It’s knowing how much space you need. A simple way to determine this is to examine the daily amount of “data change” for the servers configured to use auto-tiered storage. If the amount of SSD storage is a little larger than the amount of data that is changing, then all of the most recently used data for any given day will remain within the SSD storage.
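As a minimal sketch of that sizing rule (the 20% headroom factor and the per-server change rates are illustrative assumptions, not figures from the text):

```python
# Size an SSD tier from the daily "data change" of the servers that
# will use auto-tiered storage, per the rule above: make the SSD
# tier a little larger than the total daily change. The 20% headroom
# factor and the sample change rates are illustrative assumptions.
def ssd_tier_size_gb(daily_change_gb_per_server, headroom=1.2):
    return round(sum(daily_change_gb_per_server) * headroom, 1)

# e.g. four servers whose data changes 50-120 GB per day
servers = [50, 80, 120, 75]
print(ssd_tier_size_gb(servers))  # 390.0 GB of SSD tier
```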

On some occasions, you even may consider deploying an SSD-only storage array.  While this will certainly deliver sustained performance improvements across the servers attached to the array, there are other considerations that should play a role in your configuration decisions (e.g., total capacity, cost, data protection, and redundancy requirements).

Knowing that a primary reason for choosing SSDs is performance acceleration, it would be best to first consider the business goals that are driving the decision-making process and how technology can help you achieve those goals. You might also want to refer to Samsung’s Green SSD website pages to see how SSD technology can solve your specific data center needs.

Every application in your data center has different I/O requirements.  Understanding these requirements is the starting point for selecting the right SSD strategy. When TCO is a primary driver for storage selection, then flash best suits read-intensive applications with a random I/O pattern. Traditional HDDs may be more suitable for bulk storage that is infrequently accessed. In many cases, a combination of the two will allow you to maximize the advancements in flash technology while best addressing your specific needs and TCO requirements. 

For more information about Samsung SSDs, please visit http://www.samsung.com/global/business/semiconductor/product/flash-ssd/overview

Consistent Server Virtual Disk Configuration for Exchange Databases using Dell PowerEdge PowerShell Tools


In an earlier post, we looked at the building block architecture for Microsoft Exchange 2013 deployments using PowerEdge R730xd. In the Pod architecture, each server in the deployment has a standard configuration. Each PowerEdge R730xd server in the reference implementations described in my earlier post has 16 LFF NL-SAS drives that are configured in a specific manner for storing Exchange databases. In the scale-up scenarios, an additional 12 LFF NL-SAS drives are used. This RAID virtual disk configuration must be consistent across all the servers for better manageability.

Configuring servers consistently ensures predictability and reduces variables while troubleshooting deployment issues. System management tools and automation help achieve this consistency with far less effort while reducing human error during solution deployment. While multiple system components must be configured before we complete the solution deployment, for today’s article, we’ll limit the discussion to creation of virtual disks to store Exchange databases. As a reference for this virtual disk configuration, we’ll use the disk layout described in an Exchange 2013 solution implementation on Dell PowerEdge R730xd servers. The following figure provides a high-level overview of the disk layout used in the reference implementation.

Figure 1 RAID LUNs required for Exchange Databases

The Large Form-Factor (LFF) chassis configuration of PowerEdge R730xd contains sixteen 4TB NL-SAS drives. The internal drive tray can host up to four of these drives. This is shown in Figure 1. The LFF drives in the front bay are numbered from 0 to 11 and the LFF drives in the internal drive tray are numbered from 14 to 17. The 2.5-inch SAS drives at the rear are used for deploying OS and these are numbered as disk 12 and 13.

The recommended layout for all RAID disks in the system is shown in Table 1.

Table 1 Recommended RAID layout in PowerEdge R730xd for Exchange deployment

RAID Volume         Disks               RAID Level
OS-Volume           Disk 12 & Disk 13   RAID 1
DB-Volume1          Disk 0 & Disk 1     RAID 1
DB-Volume2          Disk 2 & Disk 3     RAID 1
DB-Volume3          Disk 4 & Disk 5     RAID 1
DB-Volume4          Disk 6 & Disk 7     RAID 1
DB-Volume5          Disk 8 & Disk 9     RAID 1
DB-Volume6          Disk 10 & Disk 11   RAID 1
DB-Volume7          Disk 14 & Disk 15   RAID 1
Restore-Volume      Disk 16             RAID 0
Global-Hot-Spare    Disk 17             RAID 0

Apart from the RAID layout, the block size for these virtual disks should be set to 512.
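One way to keep this layout consistent in automation is to treat Table 1 as data and sanity-check it before deployment. A minimal Python sketch (the dictionary simply mirrors Table 1; the checks are illustrative):

```python
# The Table 1 layout as data, with checks that no physical disk is
# assigned to more than one RAID volume and that all 18 drives
# (0-17) are accounted for.
layout = {
    "OS-Volume":        {"disks": [12, 13], "raid": 1},
    "DB-Volume1":       {"disks": [0, 1],   "raid": 1},
    "DB-Volume2":       {"disks": [2, 3],   "raid": 1},
    "DB-Volume3":       {"disks": [4, 5],   "raid": 1},
    "DB-Volume4":       {"disks": [6, 7],   "raid": 1},
    "DB-Volume5":       {"disks": [8, 9],   "raid": 1},
    "DB-Volume6":       {"disks": [10, 11], "raid": 1},
    "DB-Volume7":       {"disks": [14, 15], "raid": 1},
    "Restore-Volume":   {"disks": [16],     "raid": 0},
    "Global-Hot-Spare": {"disks": [17],     "raid": 0},
}

all_disks = [d for v in layout.values() for d in v["disks"]]
assert len(all_disks) == len(set(all_disks)), "disk assigned twice"
assert sorted(all_disks) == list(range(18)), "not all 18 drives used"
```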

To achieve consistent virtual disk configuration across servers, the RAID layout described in Table 1 must be translated into a re-usable template. This is where Dell’s systems management tools can help. The WS-Management interfaces on iDRAC help you export a system or component configuration as a Server Configuration Profile (represented as an XML file) and restore or import the configuration on a different system.

To access WS-Management interfaces using Windows PowerShell, you need to understand various Common Information Model (CIM) profiles available on iDRAC, identify the right CIM classes and methods to perform RAID device configuration, and finally use those interfaces in PowerShell through CIM cmdlets. This is not easy to do without knowing how to use CIM profiles and PowerShell CIM cmdlets. The systems management team at Dell recently announced an experimental release of PowerShell Tools for Dell PowerEdge Servers. Using the cmdlets in this experimental release, you can export component configurations and import them on a different server as easily as using any other PowerShell cmdlet.

The two cmdlets that we will explore in today’s article are Export-PEServerConfigurationProfile and Import-PEServerConfigurationProfile. The first cmdlet exports the component configuration to a network share, and the latter imports it to deploy the Server Configuration Profile.

Before you use these cmdlets, note that starting with PowerShell 3.0, if you have not disabled auto module loading, the cmdlets from any available module can be accessed directly without explicitly importing the module using the Import-Module cmdlet.

Using the Get-Help cmdlet, you can see the detailed help for the Export and Import Server Configuration Profile cmdlets.

Get-Help Export-PEServerConfigurationProfile

You can also see a list of examples of how to use these cmdlets by adding the –Examples switch parameter to the Get-Help cmdlet. Each of these cmdlets requires a CIM session created to the iDRAC of the server we’re going to manage. This is done using the New-PEDRACSession cmdlet.

#Credentials for accessing iDRAC

$DRACCredential = Get-Credential

#Create a CIM session

$DRACSession = New-PEDRACSession -IPAddress 10.94.214.63 -Credential $DRACCredential

Once we have the iDRAC session created, we can use the Export-PEServerConfigurationProfile cmdlet to export the existing virtual disk configuration to an XML file. When using this cmdlet, we need to provide the NFS or CIFS share details along with the credentials required, if any, to access the share. The Get-PEConfigurationShare cmdlet provides a method to construct a custom PowerShell object representing a CIFS or NFS share. Optionally, using the –Validate parameter, you can also test whether the network share is accessible from iDRAC.

#Share object creation for re-use

$ShareCredential = Get-Credential -Message 'Enter credentials for share authentication'

#-Validate switch ensures that the share is accessible from iDRAC.

$Share = Get-PEConfigurationShare -iDRACSession $DRACSession -IPAddress 10.94.214.100 -ShareName Config -ShareType CIFS -Credential $ShareCredential -Validate

Finally, the Export-PEServerConfigurationProfile cmdlet can be run to export the component configuration as XML.

#Export Configuration of RAID controller

Export-PEServerConfigurationProfile -ShareObject $Share -iDRACSession $DRACSession -FileName VD.xml -Target 'RAID.Integrated.1-1' -ExportUse Clone -Wait

In the above command, observe that we’re exporting the configuration of only the integrated RAID controller. By default, without the –Target parameter, the complete system configuration is exported.

You will notice in the exported configuration file that some of the attributes are commented out. Importing a Server Configuration Profile on a target system that is already configured can have destructive implications, which is why these attributes are exported as comments. Therefore, to be able to import this configuration on a target system, we need to uncomment those lines in the XML.
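As an illustrative sketch (in Python rather than PowerShell), the commented attributes can be uncommented programmatically. The exact comment markup, and the RAIDresetConfig attribute shown in the example, are assumptions based on typical Server Configuration Profile exports; inspect your own file before scripting this:

```python
import re

# Uncomment <Attribute> elements that the export wrapped in XML
# comments. Sketch only: it assumes each commented attribute sits
# inside its own "<!-- ... -->" pair, which matches typical Server
# Configuration Profile exports.
def uncomment_attributes(xml_text):
    return re.sub(r"<!--\s*(<Attribute\b.*?</Attribute>)\s*-->",
                  r"\1", xml_text, flags=re.DOTALL)

sample = '<!-- <Attribute Name="RAIDresetConfig">True</Attribute> -->'
print(uncomment_attributes(sample))
# <Attribute Name="RAIDresetConfig">True</Attribute>
```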

Now that you understand how the export works, using the Import-PEServerConfigurationProfile cmdlet is easy. Before performing the actual import, we can preview whether the configuration specified in the XML can actually be deployed on a target system. This is done using the –Preview switch parameter.

#Create a CIM session

$DRACSession = New-PEDRACSession -IPAddress 10.94.214.64 -Credential $DRACCredential

#Preview a configuration XML import

Import-PEServerConfigurationProfile -ShareObject $Share -iDRACSession $DRACSession -FileName VDGOLD.xml -Preview -Wait

If you do not see any errors after a preview, the XML configuration for the integrated RAID controller can be deployed successfully on the target system. So, to perform the deployment, we just need to remove the –Preview switch and run the Import-PEServerConfigurationProfile cmdlet again.

#Import the configuration XML

Import-PEServerConfigurationProfile -ShareObject $Share -iDRACSession $DRACSession -FileName VDGOLD.xml -Wait

The –Wait switch parameter ensures that a progress bar is shown to indicate which tasks are being performed while importing the Server Configuration Profile. Once you import the configuration, the target system reboots and the RAID virtual disks are created on the integrated RAID controller.

This method of deployment is repeatable and far less error-prone. For a system administrator, it also provides a scripting mechanism that can be used as part of a much bigger data center automation framework.

Advantages of the DDR4 memory technology in 13th generation PowerEdge servers


The 13th generation PowerEdge server family, based on the Intel E5-2600 v3 family of CPUs, implements the new DDR4 system memory standard, which offers advantages over 12G's DDR3 including faster speeds, increased bandwidth, higher density, greater energy efficiency, and improved reliability.

 Let's consider each of these advantages in more detail.

- Faster speeds: DDR4 is 14% faster when populated at 1 or 2 DIMMs per channel (DPC), and up to 40% faster at 3 DPC. These frequencies will scale up to 2400 MHz in 2015.


- Increased Bandwidth: That speed increase translates to higher throughput, as this comparison using the popular STREAM memory bandwidth benchmark illustrates.

 

To put actual quantities to these relative improvement percentages, here are measurements from PowerEdge models R630 and R620. Note these are with just a single processor installed. Total system bandwidth of a 13G launch platform has been measured at 120 GB/s!
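For context on such numbers, theoretical peak bandwidth follows from transfer rate × bus width × channel count. The 2133 MT/s rate and four channels per socket below are assumptions typical of launch DDR4/E5-2600 v3 configurations, not measurements from this article:

```python
# Theoretical peak memory bandwidth: transfers per second times
# 8 bytes per 64-bit transfer times channel count. 2133 MT/s and
# 4 channels per socket are assumed, typical launch DDR4 figures.
def peak_gb_s(mt_per_s, channels, bytes_per_transfer=8):
    return mt_per_s * 1e6 * bytes_per_transfer * channels / 1e9

print(peak_gb_s(2133, 4))  # 68.256 GB/s theoretical peak per socket
print(peak_gb_s(2133, 8))  # 136.512 GB/s across two sockets
```

Measured STREAM results land below these peaks, so a 120 GB/s system measurement is consistent with a dual-socket DDR4 platform.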

 

- Higher Density: DDR4 offers scalability for the future: individual DIMM densities start at 4GB and go up through 32GB today, with 64GB availability in 2015. Imagine the 1U model with 1.5 TB or the 4U model with 6 TB for virtualization and big data applications. Here's how the mainstream DIMM size has been trending.

 

- More efficient: Energy efficiency is increasingly important to each new CPU/GPU architecture, to regulatory certification requirements, and to server buying decisions, and DDR4 delivers it, as you can see here.


And so DDR4 system deployment will save not just electricity, but also data center provisioning costs such as backup power, cooling, and physical space.

Even in a memory-rich PowerEdge T630 configuration running the especially intensive Linpack benchmark, we found that its DDR4 memory subsystem now accounted for just 4% of the overall system input power requirement. And again, this was while that memory sustained 16% better throughput than DDR3 could in the comparably configured 12G T620.


- More reliable: DDR4 implements real-time write error detection/correction on its internal command and address buses, not just the data bus that DDR3 was limited to. DDR4 also adds built-in thermal monitoring, with the provision to adjust its access timings should a temperature excursion occur.

Thanks for reading. In a future installment I'll cover other 13G PowerEdge memory performance aspects, including access latency, Performance vs. Lock Step memory configuration, RDIMM vs. LRDIMM, Node Interleaving (NUMA vs. UMA), 1x vs. 2x refresh, and CPU Snoop mode sensitivity.

 

 

Making the Best SSD Purchasing Decision


Author: Tien Shiah, Samsung Product Marketing Manager – SSD


Effective Upgrades to Boost Performance

Let’s face it, every computer user complains about performance. It’s an unfortunate fact of life, but over time, computers become slower. New applications, filling hard drives with data, and changes to operating systems inevitably lead to slowdowns. That causes user frustration and reduces useful system life expectancy.

So what’s the best way to boost client performance? Basically, it’s making the right component choices up front when purchasing a system. Making good choices at time of purchase boosts user satisfaction and useful system life. But what are the best choices?

When buying a desktop or notebook, there are three main component choices that affect performance: the processor, the RAM, and the storage. Let’s explore each in turn.

Processor -- This upgrade is most useful for those doing processor-intensive tasks that make you wait—like image manipulation, video or audio encoding, CAD/CAM, or scientific computing. Today’s multi-core processors streamline multitasking, especially when these intensive processes are involved. Faster processors can also help boost gaming. But for general business productivity, email, and web surfing, a processor upgrade won’t help very much. Modern processors have no problem running current operating systems or applications without being bogged down.

RAM -- While RAM is an easy upgrade you can make, PC systems are limited in how much memory they can support. Beyond a certain amount used by most applications, having extra RAM will not be as cost-effective as upgrading your storage to get improved performance.

Storage -- Upgrading to a solid state drive (SSD) is one of the best choices you can make in terms of general speed boosts. An SSD can speed up your boot time and the launching of applications, as well as boosting shutdown speeds.

Upgrading your storage offers application-specific benefits as well. For example, with widespread use of high-definition video capture devices (such as those found on smart phones), more people are editing video on their computers than ever before. The speed advantage SSDs have over HDDs is critical here. Users will experience better responsiveness and changes taking effect more quickly. SSDs minimize the time video users spend waiting and maximize the time spent creating.

Video games also benefit tremendously from SSDs. Unlike word processors or spreadsheets, which need to only load themselves and a document, when video games are launched they need to load the engines that power them and the graphics to display to the user. Depending on the complexity of the game, this can take anywhere from a handful of seconds to a minute or more. New levels or missions then need to be loaded as play progresses – and the longer the delay experienced by the player, the less satisfying their gameplay experience is.

Another type of application that benefits from an SSD is anything that relies on a database to function, and one type of database-backed software most users encounter is a local mail client such as Microsoft Outlook. For users who never delete their emails and have tens of thousands of messages stored locally, an SSD will not only cause Outlook to load faster, but also make all the messages it contains available to view and act on more quickly.

To conclude, choosing an SSD such as one made by Samsung Semiconductor, the most widely recognized SSD manufacturer in the world, is likely to provide the best opportunity to enhance system performance.  Popular SSD capacities offered for notebooks include 128GB, 256GB, and 512GB. 

For more information about Samsung SSDs, please visit: http://www.samsung.com/global/business/semiconductor/product/flash-ssd/overview

For more information about Dell notebooks with SSDs, please visit: http://www.dell.com/learn/us/en/04/sb360/samsung

NVMe and You


Author: Tien Shiah, Samsung Product Marketing Manager – SSD


In the article, “The Evolution of Solid State Drives for the Enterprise,” we presented a brief overview of the interface choices available for Solid State Drives (SSDs). Of the three standard choices available, SATA, SAS, and PCIe, it was the PCIe interface that clearly supported the fastest throughput and highest performance numbers. Given the data-intensive requirements and increased performance needs of applications today, it appears certain that more and more organizations will be turning to PCIe SSDs to meet their enterprise storage needs.

The Dawn of NVMe

Within the category of PCI Express (PCIe)-based SSDs, there have been several advancements that have had a significant impact on capacity and performance levels in just the last year. While it is widely understood that PCIe SSDs utilize non-volatile NAND memory (i.e., the data held by the drive is persistent and will not be lost if power to the drive is abruptly terminated), not all PCIe SSDs have used a standard set of drivers or feature sets, the result being inconsistent drive performance by manufacturer, incompatibility in some systems, etc.

Dell and Samsung Semiconductor, along with a consortium of over 90 other companies, sought to remove the differences in available non-volatile memory (NVM) drives.  The result was a standard specification called NVM Express, or NVMe. Samsung was the first to introduce an NVMe drive to the market, providing a standardized, high-performance solution that was previously only available via expensive, custom solutions. As outlined in detail here, NVMe standard drives exploit the full potential of non-volatile memory while incorporating a feature set required by enterprise and client systems. They also extend the evolving trend of SSDs to reduce I/O latency while driving up overall performance. In fact, upon launch, NVMe drives boasted a 50%+ reduction in latency when compared to the already solid performance of SCSI/SAS SSDs. 

NVMe in Your Environment

Generally speaking, PCIe SSDs are ideal in environments where cache performance is critical. Data that is in high demand is held in cache, reducing the time needed to access that data, lowering latency, and improving application performance. Similarly, these flash drives accelerate log file writes, also resulting in increased application acceleration.

As such, customers that have broad OLTP or OLAP requirements will realize immediate performance improvements in their environment with PCIe SSDs from Samsung. Response times are reduced, transactions per second increase, and the maximum number of concurrent users goes up. This is particularly apparent in environments where there is a need to reduce the differences in performance between storage and the central processor, a need to add high-speed caching to an existing HDD tier, or an overall need to accelerate the performance of mission critical applications while maintaining the highest levels of data integrity.

When you add in the benefits associated with the NVMe standard, you can achieve performance gains across multiple cores to access critical data, enjoy scalability with headroom for current (and future) non-volatile memory performance, and leverage end-to-end data protection capabilities. As you consider your choice of SSD interfaces in your environment, you can refer to a growing list of benefits that can be achieved by deploying NVMe PCIe SSD. Dell PowerEdge Express Flash NVMe PCIe SSDs, powered by Samsung NVMe SSD technology:

• Have twice the performance of previous generational PCIe SSD devices
• Are front-access, hot-plug 2.5-inch PCIe SSD devices that support the ability to “hot-add” additional devices without the need to insert cards into PCIe slots that require taking a server offline
• Utilize device-based flash management, reducing overhead costs
• Support a wide range of applications including OLTP, OLAP, collaborative environments, and virtualization (where there is random access to data versus sequential access for reads and writes)
• Provide lower latency of data

To learn more about Dell PowerEdge Express Flash NVMe PCIe SSDs, please visit: http://i.dell.com/sites/doccontent/shared-content/data-sheets/en/Documents/Dell_PowerEdge_Express_Flash_NVMe_PCIe_SSD_Spec_Sheet.pdf

To learn more about Samsung SSDs, please visit: http://www.samsung.com/global/business/semiconductor/product/flash-ssd/overview

Dell PowerEdge 13G Servers now support UEFI Secure Boot


A guide to enabling and deploying on Windows 2012, CentOS/RHEL 7, and SLES 12.

Authors: Thomas Cantwell, Charles Rose, Aditi Satam, Michael Schroeder 

Dell OpenManage Plug-in Version 1.0 for Nagios Core is now available


Author: Vishwanath S Patil

 

The new Dell OpenManage Plug-in version 1.0 for Nagios Core version 3.5.0 and later provides a relatively easy way for current or prospective Dell customers who use Nagios Core software to monitor Dell PowerEdge servers. The plug-in provides easy integration to the Nagios Core and monitors the Dell servers through an agent-free, out-of-band technology via Integrated Dell Remote Access Controller (iDRAC) with Lifecycle Controller.

 

The plug-in equips IT administrators with the information necessary to make rapid yet informed decisions regarding their data center resources and helps to increase staff productivity by reducing unplanned downtime. This brings increased ease of use and continued investment protection for Nagios Core customers.

 

This release supports discovery and monitoring of 12th and later generations of Dell PowerEdge servers only. The plug-in provides the capability to monitor overall server health, server component-level health, and SNMP alerts generated from the devices. It also supports one-to-one iDRAC web console launch to perform further troubleshooting, configuration, or management activities.

 

Key Dell OpenManage Plug-in Features:

 

Feature: Discovery and Monitoring
Function: Discovers and monitors Dell servers, including component health (e.g., physical disk, virtual disk, fan, battery, NIC, intrusion), in the Nagios Core console
Benefit: Enables periodically monitoring the health of Dell servers from a single interface for quicker fault detection and resolution

Feature: Event Monitoring
Function: Monitors SNMP traps from supported devices
Benefit: Enables monitoring the health of Dell servers as and when a problem occurs

Feature: Server Information
Function: Provides basic server information, including component-level details
Benefit: Gives a holistic view of the server infrastructure and enables further troubleshooting and escalation to the support team

Feature: 1:1 iDRAC Web Console Launch
Function: Links and launches the one-to-one iDRAC console from Nagios
Benefit: Enables further troubleshooting and one-to-one configuration, update, or management of Dell servers


For more information, download links, and product documentation, please visit the Dell OpenManage Plug-in for Nagios Core wiki page. We encourage you to continue this conversation in the OpenManage Connections for 3rd Party Console Integration Forum if you have any comments or other feedback.


Announcing the release of biosdevname 0.6.1


(Posted on behalf of Jordan Hargrave.)

Biosdevname is a utility that helps map Linux Ethernet devices to their physical location in a system. Traditionally in Linux, Ethernet devices have been named ethX, which may or may not correspond to the LAN-on-Motherboard (LOM) numbering in the system. Biosdevname is a udev helper application that maps the original ethX names to location-specific names. This is useful when deploying multiple servers that require consistent Ethernet names across systems. The names are of the form eX_vf for embedded slots and pXpY_vf for NICs in PCI slots.


The latest release, biosdevname 0.6.1, fixes issues on some cards that support multiple devices per PCI Bus:Device:Function and merges in some upstream changes from Red Hat.

This version will be available in future OS releases from Red Hat, SUSE and Ubuntu.

The latest version of biosdevname is available from http://linux.dell.com/biosdevname or the git repository: http://linux.dell.com/git/biosdevname.git
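The naming scheme described above can be sketched in a few lines. This is purely illustrative: real biosdevname derives slot and port numbers from SMBIOS/BIOS tables rather than taking them as arguments, and the em&lt;port&gt; / p&lt;slot&gt;p&lt;port&gt; forms below follow the output commonly seen from the tool, with a numeric suffix for SR-IOV virtual functions.

```python
from typing import Optional

def location_name(embedded: bool, slot: int, port: int,
                  vf: Optional[int] = None) -> str:
    """Build a location-based NIC name in the biosdevname style:
    embedded (LOM) ports become em<port>, NICs in PCI slots become
    p<slot>p<port>, and SR-IOV virtual functions get a _<vf> suffix."""
    base = f"em{port}" if embedded else f"p{slot}p{port}"
    return base if vf is None else f"{base}_{vf}"
```

For example, the second port of a card in PCI slot 3 would be named p3p2, and its first virtual function p3p2_0.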

Installing Debian 8 on a Dell XPS 13


This is a step-by-step installation guide written by Eric Mill (@konklone) and Paul Tagliamonte (@paultag) showing how to install Debian 8 on a Dell XPS 13 notebook:

You will need:

    • A USB drive with at least 1 GB of space.
    • A USB network connection: a smartphone that can tether a WiFi connection over USB, or a USB WiFi stick.

You Will Also Need A Computer

I am using a Dell XPS 13 purchased through the Dell Ubuntu program: https://www.dell.com/ubuntu

Dell calls their program Project Sputnik (https://sputnik.github.io/), and it is managed by a friendly team of Linux engineers inside Dell who partner with Ubuntu (http://partners.ubuntu.com/dell) to ensure that your computer will Just Work with Linux.

The Dell XPS 13* is also used by multiple Debian team members, so your pain points will be theirs, and they're likely to quickly fix things.

Supporting Dell's program is a wonderful thing to do, and it's also just a great laptop.*


You Will Also Need Debian

Debian has `unstable`, `testing`, and `stable` versions:

    •  `unstable` is the most fresh: the edge.
    •  `testing` runs 10 days behind `unstable`, and is probably what you want.
    •  `stable` is (deliberately) quite out of date as it waits for software to prove itself, and because of that is extremely stable. It runs underwater robots.


This guide uses the latest beta image, on the `testing` channel. As of writing, this was: http://cdimage.debian.org/cdimage/jessie_di_beta_2/amd64/iso-cd/debian-jessie-DI-b2-amd64-netinst.iso 
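Writing the image to the USB drive is typically done with dd; the full guide covers this step, but a minimal sketch looks like the following. The device node in the usage comment is a placeholder, and pointing dd at the wrong device destroys data.

```shell
# write_iso: copy an installer ISO byte-for-byte onto a target block
# device (or, for testing, an ordinary file). DESTRUCTIVE when pointed
# at a real device -- confirm the node with `lsblk` first.
write_iso() {
    iso="$1"
    dev="$2"
    dd if="$iso" of="$dev" bs=4M conv=fsync
    sync
}

# Usage (hypothetical device node; use the whole disk, e.g. /dev/sdX,
# not a partition like /dev/sdX1):
#   write_iso debian-jessie-DI-b2-amd64-netinst.iso /dev/sdX
```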

Please visit GitHub to read Eric's excellent guide in its entirety: https://github.com/konklone/debian/blob/master/installing.md

* minor edits from original document

Whitepaper on SUSE Linux Enterprise Server 11 SP3 cluster configuration for Dell PowerEdge VRTX with NFS storage


I have been working together with our RAID storage team on some guides for configuring HA clustering on Linux. The first whitepaper we published was on clustering on VRTX with RHEL. This is a whitepaper on clustering with SLES 11 SP3. It is available at:

http://en.community.dell.com/techcenter/extras/m/white_papers/20440909

Please share any feedback on the linux-poweredge mailing list. If you are not already on the list, you can subscribe at <https://lists.us.dell.com/mailman/listinfo/linux-poweredge/>.

You asked for it: Ubuntu officially on the Precision M3800 worldwide

A year ago, I published a blog post about running Ubuntu on the Precision M3800. As someone involved in Project Sputnik and a life-long Linux user, I knew this was a system Linux users would love, so I thought I'd make the job of getting the M3800 up and running a little easier. Since then, that blog post has seen nearly 40,000 views, and a follow-up post has received 10,000 views. Beyond the view counts, we've received an overwhelming response from the community asking for an officially supported product. With the momentum we've built, I'm proud to say that the Precision M3800 is joining the XPS 13 Developer Edition as its official big brother. (read more)

Dell KACE K1000 provides anypoint management to tame network device proliferation

IT environments are becoming increasingly diverse and complex, and consequently harder to manage. Mobility, along with more and more “smart” devices (i.e., the Internet of Things), has led to a significant increase in the number and types...(read more)

VMware vSphere 5.1 Update 3 Release from Dell


This blog post is authored by Vijay Kumar TS, Project Manager Dell Hypervisor Engineering

Summary

  • VMware vSphere (ESXi) 5.1 U3 is an update release from VMware for the vSphere 5.1 branch. It primarily includes bug fixes and security updates. Please refer to the VMware release notes, which describe the updates and fixes in this release.
  • All Dell hardware (PowerEdge servers, storage and peripherals) certified for previous 5.x releases is listed as supported for VMware vSphere 5.1 U3.
  • Dell declared support for this release at VMware general availability (GA), December 11, 2014. The updated Dell-customized VMware vSphere image for 5.1 U3 is posted at support.dell.com only; this release is not offered from the factory (no factory install).
  • The Dell support documentation for this release was updated and made available at VMware GA. Please refer to Dell support documentation for VMware at the manuals page

What's new in VMware vSphere 5.1 Update 3

Please refer to the release notes, which describe the updates in this release.

 

Dell VMware Support documents


This blog post is authored by Vijay Kumar TS, Project Manager Dell Hypervisor Engineering.

Did you know? Dell VMware support documents are posted at VMware GA, and they give you detailed information on getting started with VMware on Dell PowerEdge servers. These documents are updated periodically to cover the latest changes. The following documents are posted at support.dell.com:

1.    VMware vSphere On Dell PowerEdge Systems Deployment Guide: This document helps you to deploy VMware ESXi on Dell PowerEdge servers and provides specific information about recommended configurations, best practices, and additional resources

2.    Dell PowerEdge Systems Running VMware vSphere Getting Started Guide: This document helps you in setting up your Dell PowerEdge system running VMware vSphere for the first time

3.    VMware vMotion and 64-bit Virtual Machine Support For Dell PowerEdge Systems: This document helps you find Dell PowerEdge system compatibility and support with respect to VMware vMotion

4.    VMware vSphere on Dell PowerEdge and Storage Systems: This document provides information about Dell-supported systems and Dell storage arrays compatible with VMware ESXi

5.    Dell VMware ESXi Image Customization Information: This document provides information about the Dell Customized image from the base VMware image

6.    Upgrade Guide to ESXi Using Dell Customized Image: This document provides information for upgrading the Existing OS using Dell Customized image

7.    VMware Virtual SAN Product Information Guide: The objective of this document is to assist in the deployment of VMware® Virtual SAN on Dell PowerEdge Servers

8.    VMware vSphere On Dell PowerEdge Systems - Release Notes: This document provides information on all the Known Issues with respect to VMware ESXi & Dell PowerEdge Servers.


Anypoint Systems Management e-Book – The Systems Management Imperative


Remember a few years ago when you had settled into your systems management duties and thought you had everything (or at least most things) under control?

Those were the days.

Sure, a new OS or application upgrade would come along and you’d have to learn how to manage it correctly. But you knew most of what was on the way, and since everyone used the PCs and applications you provided, you could keep things on a short leash, relatively well managed and secure.

Then came the advent of mobile devices and BYOD, with co-workers from the loading dock to the C-suite asking you, “My new mobile device is really cool. How can I run it on our network so I can be more productive?”

That’s when your IT world started changing fast. More types of devices, more OSes and The Big Difference: the absolute need to track, manage and secure a new world of devices owned by the organization, devices owned by the user (BYOD) and new network-connected devices no one could have imagined.

New imperative, new e-book

Many of our customers have discovered that BYOD, mobility and the coming Internet of Things (IoT) are throwing them curves. The systems management products they’ve relied on for PCs and servers were not designed for identifying, managing and especially securing the variety of devices their co-workers now rely on for anywhere-anytime productivity. Still, they face the Systems Management Imperative, which dictates that they know all the users and devices on the network and can keep them secure.

We’ve put together “A Single Approach to Anypoint Systems Management,” a new e-book for you IT administrators who are looking at a rising tide of mobile devices, multiple operating systems and intelligent endpoints and wondering how to bring them under your overall systems management umbrella. The e-book is your guide to integrated systems management that extends to mobile, BYO devices and the Internet of Things.

I’ll describe the e-book in more detail over my next few posts. Meanwhile, you can read Part 1 right now and start thinking about integrated systems management in your own organization.

Anypoint Systems Management E-Book – A Day in the Life of Eddie Endpoint


In my last post, I described some of the effects of mobility, BYOD and the Internet of Things on enterprise IT and pointed you to our new e-book, “A Single Approach to Anypoint Systems Management.” This time I’ll focus on what goes into systems management and how mobility and BYOD affect the workload of IT administrators.

Eddie Endpoint and the Systems Management Basics

No, that’s not a garage band. Eddie Endpoint is the IT manager who looks at every problem through the lens of systems management. Like you, he is on the horns of the BYOD/mobility dilemma: how to allow users to choose and bring the devices that make them most productive, while maintaining control and security on his network. And now he’s facing the Internet of Things and the management of devices he may never have considered.

Eddie cut his teeth deploying, maintaining, updating, securing and migrating endpoints — most frequently PCs, Macs and servers. More recently, Eddie has begun to perform those same duties on new kinds of endpoints — tablets, smart phones, user-owned devices and intelligent devices connected to his network.

He may track hardware and software manually using spreadsheets and run other functions like help desk and patching through one-off products. He may deploy and manage Macs using yet another application. On top of running tools for those tasks, Eddie must now provide mobile users access to his company’s applications while ensuring data and access for BYO devices remain secure.

When it’s PCs, Macs and servers, Eddie has most of the processes and tools he needs, but for BYO, mobile and other network-connected devices, he’s faced with using a variety of solutions that usually do not play well with his other systems management tools.

Because of mobility, BYOD and the Systems Management Imperative of ensuring that all devices run securely on his organization’s network, Eddie’s endpoint-world is now becoming an anypoint-world.

Do you feel Eddie’s pain? Are you as eager as he is for integrated management of all the devices in your organization? Stay tuned for more in my next post. Meanwhile, read Part 1 of our new e-book right now.

Matching data requirements with flexible storage


By Gregory Kincade:

As data needs continue to grow, the latest direct attach storage from Dell helps seamlessly expand the capacity of 13th-generation Dell PowerEdge servers.

Today’s enterprises are confronted by an explosive growth of business-critical data, fueled by the prolific use of mobile devices, cloud and social media. No longer consisting of only predictable, structured formats such as database tables, data now includes diverse combinations of email, video, audio, spreadsheets, medical records and other semi-structured and unstructured formats.

The shrinking IT budgets of many small and midsize organizations simply cannot keep pace with the growth and complexity of this data. Yet to remain competitive, these organizations must maintain the reliability, availability and security of their data. They need cost-effective, scalable enterprise storage that can handle both the expanding volumes and the increasing diversity of data. Moreover, the storage solution should include a simple-to-manage, powerful storage platform that supports applications and services requiring an efficient, fully redundant infrastructure to maintain high availability.

Next generation of direct attach storage

Direct attach storage (DAS) is well suited for situations in which throughput and price are the most important considerations and the advanced features of network attached storage (NAS) or storage area networks (SANs) are not required. DAS offers performance benefits over NAS and SAN because the server does not need to cross the network for reads and writes. As a result, DAS can efficiently support demanding applications such as email servers and audio and video streaming.

In addition, DAS is designed to be simpler to deploy and maintain than NAS or SAN. NAS and SAN require network planning and the acquisition and installation of switches, routers, cabling and connections. By avoiding these costs and complexities, DAS remains a viable choice for small businesses or for branch offices. Consequently, for many remote and small office deployments, DAS usually enables greater throughput and can be more cost-effective and easier to configure, compared to NAS and SAN.

To deliver substantial, scalable storage capacity, a RAID array can be appended to a daisy chain of DAS units. Enterprise RAID solutions are designed to offer much of the fault tolerance, availability and data protection provided by NAS and SAN with features such as disk, controller and cooling redundancy. When anchored by a well-designed RAID array, DAS can provide a solution that is easy to own and manage. Nevertheless, pure DAS solutions traditionally were not deemed viable when storage needed to be shared or virtualized and scaled.

Now, scalable and dependable storage solutions that incorporate DAS are ready for the spotlight. In concert with innovative 13th-generation Dell PowerEdge servers, Dell has released its next generation of DAS enclosures: the Dell Storage MD1400 and the Dell Storage MD1420 12 Gbps Serial Attached SCSI (SAS) storage enclosures. The MD14xx models are designed to provide storage expansion for the latest PowerEdge servers using the Dell PowerEdge RAID Controller 9 (PERC9) H830 adapter or the Dell 12 Gbps SAS host bus adapter (HBA).

Enhanced reliability and fault tolerance through RAID

The PowerEdge R730xd, PERC9 H830 adapter and MD14xx combine to form an excellent platform for robust Microsoft® Exchange 2013 mailbox servers and storage for Exchange 2013 mailbox databases and transactional logs. The MD14xx is designed to support the resilience of the Exchange solution by eliminating single points of failure with hot-swappable redundant fans, power supplies and Enclosure Management Modules (EMMs).

The PERC9 H830 adapter, which enables management software to recognize connected storage as a single unit, can detect and use redundant paths to drives contained in MD14xx enclosures. Two SAS cables can be connected between the adapter and an enclosure for I/O path redundancy (see figure).


Fig. 1

Redundant path configuration: PERC9 H830 adapter and Dell Storage MD14xx enclosures

Path redundancy enables the PERC9 H830 adapter to withstand failure of a cable or an EMM by switching over to the remaining operational path. When redundant paths exist, the controller automatically balances the I/O load through both paths to each disk drive. This load-balancing feature, which is automatically turned on when redundant paths are detected, helps increase throughput to each drive.

The PERC9 H830 adapter supports the Exchange 2013 recommended RAID levels — RAID-1, RAID-5 and RAID-10 — for system partitions, database files volumes and database log files volumes. The controller is designed to automatically detect and rebuild failed physical disks when a new drive is placed in the slot where the failed drive resided. In addition, if a configured hot spare is present, the controller automatically and transparently tries to use it to rebuild failed physical disks.

If drive security is a priority, the option to use self-encrypted drives (SEDs) with drive-level encryption helps ensure data is secure, even if the drive is removed. Additionally, SEDs support the instant secure erase feature, which is designed to permanently remove data when repurposing or decommissioning drives.

MD14xx enclosures can be managed easily through Dell OpenManage systems management tools, leveraging the Integrated Dell Remote Access Controller (iDRAC) to help simplify and automate essential management tasks across server, storage and networking platforms in multi-hypervisor and OS environments.

The MD14xx models offer the flexibility to mix and scale a variety of nearline SAS, SAS, SED and solid-state drive (SSD) formats within a single enclosure. Up to eight MD14xx enclosures — four per PERC9 port — can be connected to each PowerEdge server, helping simplify the designer’s task of customizing an Exchange 2013 storage solution to fit the desired cost, capacity and IOPS profile.

Cost-effective virtualization through software-defined storage

Software-defined storage (SDS), in which the storage management stack is embedded in the OS, offers a low-cost alternative to traditional RAID storage for efficiently managing and protecting mission-critical data. Microsoft addresses the SDS challenges of building a scalable, resilient and cost-effective storage platform through Storage Spaces, available with the Microsoft® Windows Server® 2012 R2 OS.

Storage Spaces is virtualized storage based on a disk-pooling paradigm. It enables administrators to convert externally connected enclosures of physical disks into storage pools. From the storage pools, administrators can create thinly provisioned virtual disks, which are also known as storage spaces.

Administrators can configure storage pools using Storage Spaces and dynamically add them as needed. This feature not only facilitates storage scalability, but can be used for fault tolerance because administrators can configure spaces with mirroring for clustered Microsoft® Hyper-V® or Microsoft® SQL Server® workloads and parity redundancy for backup and archival workloads. Mirroring allows data corruption to be seamlessly repaired across mirrored volumes.

To support deployments that require an added level of fault tolerance, Storage Spaces can associate each copy of data with a particular Just a Bunch of Disks (JBOD) enclosure. This capability is known as enclosure awareness. With enclosure awareness, if one enclosure fails or goes offline, the data remains available in one or more alternate enclosures. The MD14xx storage enclosures support the SCSI Enclosure Services (SES) necessary for this feature to work properly.
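The enclosure-awareness invariant described above can be modeled with a short sketch. This is a toy allocator, not the actual Storage Spaces placement algorithm: it simply pins each of an extent's replicas to a distinct JBOD enclosure, so losing any single enclosure never loses every copy of an extent.

```python
from itertools import cycle

def place_extents(n_extents, enclosures, copies=3):
    """Toy model of enclosure-aware mirroring (not the real Storage
    Spaces allocator): each extent gets `copies` replicas, each on a
    distinct enclosure, assigned round-robin across the enclosures."""
    if len(enclosures) < copies:
        raise ValueError("enclosure awareness needs >= copies enclosures")
    rotation = cycle(range(len(enclosures)))
    placement = []
    for _ in range(n_extents):
        start = next(rotation)
        placement.append([enclosures[(start + i) % len(enclosures)]
                          for i in range(copies)])
    return placement

def survives_enclosure_loss(placement, failed):
    """True if every extent still has at least one replica outside the
    failed enclosure."""
    return all(any(e != failed for e in replicas) for replicas in placement)
```

With four enclosures and three-way mirroring, as in the recommended starting configuration described below, every extent survives the loss of any one enclosure.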

Administrators also can use Storage Spaces with a two-node server cluster for failover, where one or more pools are clustered across multiple nodes within a single cluster. For example, the MD14xx enclosures can be connected to a pair of PowerEdge R730 servers with clustered file server roles; each server has redundant paths via the Dell 12 Gbps SAS HBAs to all the disks in each MD14xx enclosure (see figure).


Fig. 2 

Example of two-node failover cluster configuration: Two servers, each with two HBAs and four storage enclosures


The two-node cluster depicted in the figure is an excellent example of the recommended starting configuration. It supports three-way mirroring and enclosure awareness, helping to simultaneously protect against an enclosure failure and two disk failures across different enclosures.

By using Cluster Shared Volumes (CSVs), administrators can unify storage access into a single namespace for ease of management. All Hyper-V cluster nodes can access a CSV at the same time for direct I/O. This unified namespace allows highly available physical or virtual workloads to transparently fail over to another server if a server failure occurs.

Another Windows Server 2012 R2 capability enabled by Storage Spaces is automated data tiering. The most frequently accessed data is automatically prioritized to be placed on high-performance 12 Gbps SAS SSDs, and the less frequently accessed data on high-capacity, lower-performance hard disk drives (HDDs). The data is periodically moved to its appropriate location with minimal impact to the running workload.

Storage Spaces includes the ability to automatically rebuild storage spaces using free space in a storage pool. If a physical disk fails, Storage Spaces regenerates the data that belongs to the failed physical disk without any user intervention.

Finally, Storage Spaces can use existing SSDs in the storage pool to create a write-back cache that is tolerant of power failures and buffers small random writes to SSDs before later writing them to HDDs. Small random writes often dominate common enterprise workloads, and they can impact the performance of other data transfers that are taking place. By using SSDs — which excel at random access — for a write-back cache, Storage Spaces helps reduce the latency of the random writes and the performance impact of other data transfers.

Versatile storage expansion

The right storage approach — traditional hardware-based RAID or SDS through Storage Spaces — depends on an organization’s budget and fault tolerance. The intelligence of the PERC9 H830 adapter helps increase reliability and fault tolerance, and the 12 Gbps SAS HBA is well suited for an SDS implementation. Along with 13th-generation PowerEdge servers and MD1400 or MD1420 DAS enclosures, either configuration makes an exceptional building block for the storage infrastructure projects of small and midsize organizations, as well as remote and branch offices of large enterprises.

Learn More

Dell Storage DAS

Using Schedule Repository Search Option to Update the OpenManage Essentials (OME) Repositories - Dell Repository Manager (DRM)


OME and DRM are two strong products in the Dell Systems Management portfolio. Each plays a crucial role in data center management on its own, but they work remarkably well together.

Plenty of information on OME and DRM integration is already available. Let’s see what DRM offers with its latest release.

DRM provides a searchable interface used to create custom software collections, known as bundles, which are made up of Dell Update Packages (DUPs). OME is an easy-to-install, intuitive and user-friendly systems management console that streamlines monitoring and managing Dell enterprise hardware.

Creating and Updating OME Repository with latest update:

DRM is a Windows-based application that helps IT administrators easily create repositories of Windows and Linux updates, quickly identify new updates, and package them into a format for deployment.

DRM can be integrated with several Dell tools, such as OME, to create and update a customized repository. After a repository is established, DRM can identify when newer updates are available and refresh the repository accordingly using the Schedule Repository Search option.

This video is intended for customers who use OpenManage Essentials (OME) and want to update the OME-specific repository with the latest drivers, BIOS, firmware, and applications. It shows how to use the DRM Schedule Repository Search feature to bring the OME repository up to date.

(Please visit the site to view this video)

You can select either Latest updates for all devices or Latest updates for only out-of-date devices, and you can schedule the repository search to run daily, weekly, or monthly, as convenient.

Conclusion:

Here, we have shown the integration of DRM with OME using the DRM Schedule Repository Search option. As the video demonstrates, this DRM feature makes IT administrators’ lives much easier and saves them time.

ProSupport Plus for PCs and tablets: Bringing the “Wow” factor to PC and tablet support


By Charles Cook (Product Manager, Global Services)

In a previous life, when I was an IT program manager, there was always tremendous competition to work on IT projects that created new capabilities. The company wanted IT to focus on projects that would help grow the business, but we also had the critical responsibility of maintaining a high level of workstation and infrastructure support. The problem: it was almost impossible to find time to do both.

Today, as a Services Product Manager at Dell, I continue to hear that solving this problem is a top priority for our customers. In fact, the challenge has become even greater as IT shops deal with the increased complexity that comes with a proliferation of personal devices in the workplace and the need for mobility. IDC reports that IT continues to spend 80% of its time on routine operations and support, leaving only 20% for innovation and strategic projects. To help shift this trend, our Services team has launched a new offer that dramatically reduces support time and costs, while freeing up time for IT to focus on solving business problems.

ProSupport Plus for PCs and tablets is the only truly complete support service for PCs and tablets. The service includes 24/7 priority access to our highly rated ProSupport engineers for hardware and software support, plus industry-first proactive and predictive technology. Our SupportAssist technology proactively monitors your PCs and tablets and notifies Dell when there is a hardware or software issue. If an issue is detected, the software automatically creates a case, gathers the relevant device information, checks for entitlement and phones home to Dell tech support. Our Dell ProSupport engineers then proactively contact you to quickly resolve the issue. This process represents a new paradigm in device support, which until now, has been a completely reactive process.

Dell’s proactive support model is fast, automated and dramatically reduces IT effort to resolve issues. In independent testing, it was found that resolving an issue with ProSupport Plus with SupportAssist resulted in up to 58% fewer steps and up to 84% less time on the phone with tech support versus HP and Lenovo.* ProSupport Plus with SupportAssist can also prevent downtime by predicting potential issues before they become a problem. Imagine a situation where your hard drive is about to fail, but you are warned first by Dell so you have time to back up your data. A new hard drive arrives in the mail the next business day, you restore your data, and are quickly back up and running—averting a major problem. That is the type of “wow” customer experience that we will be delivering with ProSupport Plus and you can only get it from Dell.

By now, I hope you are as excited about this new offer as we are. But it gets even better because we have also included two other essential services—accidental damage repair and hard drive retention. We listened to you, and these two services are by far the most highly valued additional support services. Having coverage to protect your devices from drops, spills and surges and the ability to keep the old hard drive post-replacement, provides investment protection and gives you control over your data. For customers with more than 1000 ProSupport Plus systems, Dell also provides a dedicated Technical Account Manager for escalation management and service history reporting. We bundled these essential services to ensure that ProSupport Plus is the most complete offer for PCs and tablets. Learn more about how your organization can benefit from Dell ProSupport Plus at www.Dell.com/ProSupportPlus

*Based on Nov 2014 Principled Technologies Test Report commissioned by Dell. Actual results will vary. Full report. SupportAssist not available on Venue 7 and 8 tablets.
