Channel: Dell TechCenter

The Linux DCB stack – A brief overview


Blog post written by Shyam Iyer and Ahmad Ali.

The present Linux DCB stack is the result of various vendors independently pushing code into the Linux kernel. This code is aligned with each vendor's hardware strategy and was submitted to separate subsystems under different maintainers. The maintainers were not necessarily coordinating these changes, and the uncoordinated, parallel development efforts have resulted in the differences in user-space interfaces and management tools that we see today. There are pros and cons to each implementation. Nevertheless, there is a need to standardize management interfaces in the Linux DCB stack so that system administrators need only a common set of tools and interfaces across vendors for basic tasks.

One vendor camp requires a netdev interface to push or get configurations and retrieve statistics. It relies on Open-LLDP based open source tools to provide the LLDP engine. Initially, this approach was used for software FCoE initiators only, and no integration was done with Open-iSCSI based tools for software iSCSI initiators. Over time this was corrected by allowing DCB configuration for iSCSI through Open-LLDP/dcbtool and by adding support for iSCSI App TLVs in these tools. Additionally, this approach has been integrated with system-wide QoS solutions like cgroups; the specific tool that achieves this is cgdcbxd.
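As a hedged sketch (the interface name eth2 is a placeholder, and the flag syntax can vary slightly between lldpad versions), enabling DCB and the iSCSI App TLV with dcbtool looks roughly like this:

dcbtool sc eth2 dcb on                # enable the DCB state machine on the port
dcbtool sc eth2 pfc e:1               # enable priority flow control (PFC)
dcbtool sc eth2 app:1 e:1 a:1 w:1     # enable, advertise, and set willing mode for the iSCSI App TLV (subtype 1)
dcbtool gc eth2 dcb                   # query the current DCB state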

The other vendor camp requires the vendor's own tools to manage DCB on Linux. These are mainly the converged network adapter (CNA) vendors and traditional Fibre Channel HBA vendors. To their credit, the original customers for such adapters wanted the same look and feel as a Fibre Channel HBA when migrating to Ethernet based FCoE HBAs. However, convergence also created different user-kernel interfaces for the converged functions in the same adapter. For instance, a typical CNA vendor provides a SCSI HBA interface for its FCoE and iSCSI functions and a netdev interface for its NIC function. This means that Open-LLDP based DCB management using a netdev interface is not sufficient for CNAs: the DCB queue statistics observed from a NIC interface (netdev) may not correlate with statistics pertaining to the iSCSI or FCoE function of the CNA.

The netdev camp contends that DCB is a layer-2 Ethernet function and that a netdev interface best describes the management index used to probe or configure an Ethernet based adapter. In reality, though, Ethernet is just a medium to transport Fibre Channel, iSCSI, or any other storage/interconnect protocol. In a similar tectonic shift in the industry, Open-iSCSI tools have already started integrating control and management of iSCSI offload HBAs in addition to software iSCSI initiators. This has made iSCSI configuration and management simpler and easier to use, and users do not have to relearn vendor specific tools. Similarly, Linux DCB management could benefit from a common user-space tool that performs basic DCB configuration to coordinate with switch settings and provides basic statistics.

For NIC based solutions that lack an LLDP engine in the firmware, Open-LLDP provides that engine in the host OS. Any NIC that provides DCB primitives in the hardware can use Open-LLDP's LLDP state machine and turn into a fully capable DCB adapter. This has helped vendors jumpstart efforts to offer DCB based adapters and has become a common interface for many traditional NIC vendors. Additionally, running an LLDP engine in Linux user space architecturally allows KVM based virtualization solutions to query DCB configurations and match virtual machine QoS SLAs to the hardware based DCB QoS configurations.

The DCB management tools in the host should be common to both NIC-only DCB adapters and CNAs. One approach is to retain the DCB stack in the OS but lessen its netdev dependency. A DCB device class could be a netdev device or a SCSI device, and both should support the same or a similar API to query or set DCB configurations. This architecture envisions a core DCB kernel library with callbacks to both NIC drivers and HBA drivers. The core DCB kernel library should be manageable over dcbnl sockets via Open-LLDP, thus standardizing the kernel ABI for managing DCB on all adapters. DCB should also integrate seamlessly with the requirements of a virtual machine's QoS SLAs and provide a hardware based lossless fabric and other QoS features, such as per-VM bandwidth guarantees. This can be achieved only if the management interfaces are common and open across all vendors. In conclusion, standardizing on Open-LLDP, dcbtool, and other host-based tools for NICs and CNAs will make Linux DCB management much simpler for system administrators. DCB as a technology has many desirable features for network, storage, and virtual server administrators.


Comparison between NPAR and SR-IOV in VMware ESXi


What is NPAR?

NPAR (NIC Partitioning) provides the capability to create multiple native Ethernet interfaces that share a single physical port.

What is SR-IOV?

SR-IOV (Single Root I/O Virtualization) is a standard that can present a single PCIe device (called a Physical Function) as multiple independent PCIe devices (each called a Virtual Function) to operating systems and hypervisors.

NPAR and SR-IOV are both technologies that provide I/O virtualization capabilities; however, they do so in different ways.

This blog captures the high-level capabilities of NPAR and SR-IOV in VMware ESXi.

A side-by-side comparison of NPAR and SR-IOV:

  • Standardization: NPAR is specific to a server OEM, whereas SR-IOV is a PCI-SIG standard.
  • Implementation: NPAR is implemented at the hardware layer; for SR-IOV, SR-PCIM (Single Root PCI Manager) has to be implemented at the hypervisor level.
  • Partitioning: On a dual port adapter with NPAR, each physical port is partitioned into 4 physical functions, and each of the 4 partitions is an actual PCI Express function. With SR-IOV, each physical port is instead partitioned into multiple virtual functions; a maximum of 64 virtual functions can be created per port (see note 1 below).
  • Per-port settings: With NPAR on a dual port adapter, enabling NPAR on port 0 causes port 1 to inherit the same setting. With SR-IOV, virtual functions have to be enabled on each port based on the requirement.
  • Drivers: NIC partitions are separate physical functions, and the same NIC driver exposes all of them. Virtual functions are associated with a physical function and require a Virtual Function (VF) driver at the guest OS level.
  • Bandwidth: With NPAR, you can allocate flexible bandwidth to each physical function; a virtual function does not have the capability to allocate bandwidth.

1. SR-IOV supports up to 43 virtual functions on supported Intel NICs and up to 64 virtual functions on supported Emulex NICs. The actual number of virtual functions available for pass-through depends on the number of interrupt vectors required by each of them.
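As a rough sketch of how virtual functions are typically enabled on an ESXi host (the module name ixgbe and the VF counts are assumptions that depend on the NIC in use; a host reboot is required afterwards):

# Request 8 virtual functions on each of the two ports handled by the ixgbe driver
esxcli system module parameters set -m ixgbe -p "max_vfs=8,8"

# Verify the parameter after the reboot
esxcli system module parameters list -m ixgbe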

NOTE: Please consider the networking configuration maximums when enabling NPAR or SR-IOV in a VMware vSphere environment.

http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf

Other references:

http://en.community.dell.com/techcenter/b/techcenter/archive/2012/10/26/sr-iov-and-vmware-esxi.aspx

http://en.community.dell.com/techcenter/b/techcenter/archive/2012/08/21/some-thoughts-about-nic-partitioning-npar.aspx

http://en.community.dell.com/techcenter/b/techcenter/archive/2013/07/02/nic-partitioning-in-vmware-esxi.aspx

DELL Server DRAC Card Soft Reset With Racadm


The following blog post is from Didier Van Hoye, a Technical Architect, Dell TechCenter Rockstar and avid blogger.

Sometimes a DRAC goes BOINK

Sometimes a DRAC (Dell Remote Access Card) can give you issues, often due to a lingering process or some other hiccup. You can try a reboot, but that doesn't always fix the issue. You can go into the BIOS and cancel any running System Services. A "confused" DRAC card can also be fixed by shutting down the server and cutting power for 5 to 10 minutes. That's good to know as a last resort, but it often isn't feasible outside a maintenance window when you're on premises.

You can also do a local or remote reset of the DRAC card via OpenManage Server Administrator (OMSA) or racadm. See RACADM Command Line Interface for DRAC for more information on how and when to use this tool. racadm can be used for a lot of remote configuration and administration tasks, one of which is a "soft reset": basically a power cycle, aka reboot, of the DRAC card itself. Don't worry, your server stays up.

Local: racadm racreset soft

Remote: racadm -r <ip address> -u <username> -p <password> racreset soft
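Once the card comes back, a quick way to confirm it is responsive again (using the same placeholder address and credentials as above) is to query its system information:

racadm -r <ip address> -u <username> -p <password> getsysinfo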

Real life example

I was doing routine maintenance on 4 Hyper-V clusters, and as part of that, DUPs (Dell Update Packages) were being deployed to upgrade some firmware. This can be automated nicely via Cluster Aware Updating, and the logging option will help you pinpoint issues. See http://workinghardinit.wordpress.com/2013/01/09/logging-cluster-aware-updating-hotfix-plug-in-installations-to-a-file-share/ for more information on this.

That is how we found that the DRAC upgrade was not succeeding on two nodes.

On one node, it was because the DUP could not access the Virtual USB Device:

Software application name: iDRAC6
   Package version: 1.95
   Installed version: 1.92

Executing update…

Device does not impact TPM measurements.

Device: iDRAC6, Application: iDRAC6
  Failed to access Virtual USB Device

==================> Update Result <==================

Update was not applied

================================================

Exit code = 1 (Failure)

and on the other, it was because of another lingering DRAC process:

 iDRAC is currently unable to process this request because of another task.
  Please attempt one or more of the following steps to cancel the pending iDRAC task:
  1) Wait 30 minutes and retry your request.
  2) Reboot the system; Press F10; select ‘Exit and Reboot’ from Unified Server Configurator, and retry your request.
  3) Reboot the system; Press Ctrl-E; select ‘System Services’. Then change ‘Cancel System Services’ to YES, which will close the pending task;
      Then press Enter at the warning message. Press ESC twice and select ‘Save Changes and Exit’ and retry your request.

==================> Update Result<==================

Update was not applied

================================================
Exit code = 1 (Failure)

Those are some nice suggestions, but the racreset is another good one to have in your toolkit. It's fast and effective.

Run racadm racreset soft


Wait a couple of minutes and then rerun the DUP or the items in SUU that failed. With some luck, they will now succeed.


Follow-up: Ubuntu on the Precision M3800


In my previous post about the Precision M3800, I noted that a couple of pieces still did not fully work on Linux: the touchscreen, which requires a "quirk" to fix, and the SD card reader, which did not work at all. I have some updates on both.

Kent Baxley of Canonical and I are working on getting the quirks for the touchscreen included in the official Ubuntu and the upstream kernels. Until then, you can still use the packages available at <http://linux.dell.com/files/ubuntu/contributions/>.

As for the SD card reader, there is a fix in the upstream 3.13-rc1 kernel's rts5249 driver. We have worked with Canonical, and this fix is expected to be in an upcoming Stable Release Update (SRU) kernel for Ubuntu 13.10. For users of other Linux distributions, I am working with the patch's author to get this patch into older stable kernels.

Building a Private Cloud File Solution Using Owncloud


Editor’s Note:

Dell Solution Centers, in conjunction with Intel, have established a Cloud and Big Data program to deliver briefings, workshops, and proofs of concept focused on hyper-scale programs such as OpenStack and Hadoop. The program's Modular Data Center contains over 400 servers, providing hyper-scale capability that allows customers to test drive their solutions.

In this blog series, Cloud Solution Architect Kris Applegate discusses some of the technologies he is exploring as part of this program – and shares some really useful tips! You and your customer can learn more about these solutions at one of our global Solution Centers; all have access to the Modular Data Center capability and we can engage remotely with customers who cannot travel to us.

*******************************************

One of the most popular Software-as-a-Service offerings users associate with modern cloud computing is file sharing and backup. Having a location where you can easily store, share, and back up your files across multiple computers is an attractive option for private cloud customers as well. One solution is the open source software ownCloud (http://owncloud.org). It allows you to run a private instance on Windows or Linux and share not only files but also photos, music, contacts, and calendars. Moreover, you can attach it to local storage, network storage, or even other cloud storage providers.

A couple of possible use cases:

Remote Office / Branch Office File Share: Using VMware or Microsoft Hyper-V, you could run this in a VM attached to a VRTX and provide users with an easy-to-use file sharing solution.

Large Enterprise Private File Share: Attaching this to a large scale-out file storage solution such as OpenStack Swift would enable handling thousands or even hundreds of thousands of users' files on a robust, elastically scalable storage infrastructure.

Users can access this cloud by means of the web interface, client software (Windows, Mac, Linux), or mobile applications (Apple/Android). They can send links to other ownCloud users or even to any public internet user. These links can be password protected and can even have an expiration date.
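Because ownCloud exposes files over WebDAV, scripted access requires no special client. A minimal sketch, assuming a hypothetical server URL and user account:

# Upload a file to a private ownCloud instance over WebDAV
curl -u alice -T report.pdf https://cloud.example.com/owncloud/remote.php/webdav/report.pdf

# List the contents of the root folder
curl -u alice -X PROPFIND https://cloud.example.com/owncloud/remote.php/webdav/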

ownCloud also has an open architecture that allows users to write plugins to extend the functionality of the platform. Popular plugins enable antivirus scanning, encryption, URL shortening, and LDAP/AD integration.

Today's users are already familiar with similar tools in their personal lives, so delivering something just as intuitive and simple can go a long way toward making them productive faster. If you would like to discuss this use case or any other Dell solution, talk to your account team about how a Solution Center engagement might help you.

 

Hotpluggable PCIe SSD support from VMware ESXi 5.5


This blog post was written by Avinash Bendigeri and Krishnaprasad K from the Dell Hypervisor Engineering team.

VMware announced hot-pluggable PCIe SSD storage support from ESXi 5.5 onwards, and Dell announced support for PCIe SSD flash devices starting with the PowerEdge 12th generation server release. This blog lists the PowerEdge servers that support PCIe SSD and describes the PCIe SSD flash device use cases in a VMware ESXi environment.

From VMware ESXi 5.5 onwards, the Micron driver (mtip32xx_native) used to initialize PCIe SSD flash devices is available as an inbox driver. The VMware ESXi 5.1.x branch does not include this driver by default; however, the Dell customized VMware ESXi 5.1.x images are bundled with it. Refer to the "ESXi 5.x Image Customization Information" guide available at the Dell VMware ESXi documentation site for the version details of the Micron driver integrated in the VMware ESXi 5.1.x branches.
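To verify which driver a host is carrying (a minimal sketch using standard esxcli commands; the exact VIB name may differ between the inbox and Dell customized images):

esxcli software vib list | grep -i mtip     # look for the mtip32xx-native package and its version
esxcli storage core adapter list            # the PCIe SSD should show up as a storage adapter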

NOTE: Even though PCIe SSD is supported in the ESXi 5.1.x branch, hot swapping of PCIe SSDs is not; that feature is available from ESXi 5.5 onwards only.

The user guide for PCIe SSD flash devices is available under the "User's Guide" section of the Dell support site. This guide covers the PCIe SSD architecture, setting up PCIe SSD on Dell PowerEdge servers, managing PCIe SSD, and driver installation for the supported operating systems. The Dell PowerEdge Express Flash PCIe SSD spec sheet can be found here.

Dell PowerEdge server(s) supporting PCIe SSD flash devices

The following PowerEdge servers support Express Flash PCIe SSDs:

  •  PE R820
  •  PE M620
  •  PE R720
  •  PE M820
  •  PE R620
  •  PE T620

The use cases of PCIe SSD flash devices in VMware ESXi

There are three use cases for PCIe SSD devices in VMware ESXi.

1. PCIe SSD storage as a VMFS datastore

The VMware Virtual Machine File System (VMFS) is an automated file system that simplifies storage management for virtual machines. Adding PCIe SSD storage as a VMFS datastore works the same way as for any other storage device; the detailed steps to add a storage device as a VMFS datastore are explained here. This VMFS datastore can then be used for deploying virtual machines.

2. PCIe SSD storage for swapping to Host Cache

The VMFS datastore created on PCIe SSD flash storage can be used to allocate space for host cache. Read the VMware documentation to understand swapping to host cache and its configuration.

3. PCIe SSD storage for virtual Flash (vFlash) feature

Virtual Flash is a new feature introduced in VMware ESXi 5.5. It accelerates virtual machine performance through the use of local SSD disks. Refer to the VMware KB articles on virtual Flash to learn more about the feature and to understand how to configure vFlash using SSD devices, which can include PCIe SSDs as well.

Dell Management Packs now support Microsoft System Center 2012 R2!


You can now use the Dell OpenManage Integration Suite for Microsoft System Center with Microsoft System Center 2012 R2 Operations Manager.  We have qualified all the Management Pack suites listed below on the latest version of OpsMgr and updated the documentation for R2 support.

• Dell Server Management Pack Suite – Discover, inventory and monitor Dell PowerEdge servers (agent-based option with Windows/OpenManage Server Administrator, and agent-free option using WSMAN for 12th generation Dell PowerEdge servers), Chassis Management Controllers using SNMP, and iDRACs using SNMP. Download: version 5.1, Documentation

• Dell Client Management Pack – Discover, inventory and monitor Dell client PCs running Windows and OpenManage Client Instrumentation (OMCI). Download: version 5.0, Documentation

• Dell Printer Management Pack – Discover, inventory and monitor Dell printers using SNMP. Download: version 5.0, Documentation

• Dell MD Storage Array Management Pack Suite – Discover, inventory and monitor Dell PowerVault MD storage arrays. Download: version 5.0, Documentation

• Dell EqualLogic Management Pack Suite – Inventory and monitor Dell EqualLogic storage arrays using SNMP. Download: version 5.0, Documentation

The latest releases of all the listed Dell Management Packs work as-is with System Center 2012 R2 Operations Manager; the only exception is that the Chassis Modular Server Correlation feature of the Server MP Suite is not supported on R2 (issue 110032 in the online Release Notes).

Microsoft Most Valuable Professional (MVP) – Best Posts of the Week around Windows Server, Exchange, SystemCenter and more – #56


"The Elway Gazette"

Hi Community, here is my compilation of the most interesting technical blog posts written by members of the Microsoft MVP Community. The number of MVPs is growing well, and I hope you enjoy their posts. To all MVPs: if you'd like me to add your blog posts to my weekly compilation, please send me an email (flo@datacenter-flo.de) or reach out to me via Twitter (@FloKlaffenbach). Thanks!

Featured Posts of the Week!

ESX the PowerShell Way with Get-ESXCli by Jeffery Hicks

Cmdlet Reference Download for Windows Azure Pack for Windows Server #WAP #WindowsAzure #SCVMM by James van den Berg

MP Control Manager not responding by Damian Flynn

WS2012 R2 Hyper-V Manager Can Be Used With WS2012 Hyper-V by Aidan Finn

Azure 

Cmdlet Reference Download for Windows Azure Pack for Windows Server #WAP #WindowsAzure #SCVMM by James van den Berg

Events

Das war die TechNet Conference 2013 in German by Toni Pohl

Hyper-V

Hyper-V VM-Ressourcen priorisieren in German by Benedict Berger

Alles, was ihr schon immer über Generation-2-VMs in Hyper-V 2012 R2 wissen wolltet, aber euch nicht zu fragen trautet in German by Nils Kaczenski

Introducing Hyper-V Server 2012 R2 by Aidan Finn

WS2012 R2 Hyper-V Manager Can Be Used With WS2012 Hyper-V by Aidan Finn

Removing the ghost Hyper-V vNic adapter when using Converged Networks after in-place upgrade to W2012R2 by Alessandro Cardoso

Office 365

CHANGE OFFICE 365 PASSWORD EXPIRATION POLICY by Thomas Maurer

PowerShell

ESX the PowerShell Way with Get-ESXCli by Jeffery Hicks

Managing VMware Tools with PowerCLI by Jeffery Hicks

Sharepoint 

Two new SharePoint 2013 Workflow Articles on MSDN - Debugging & Workflow CSOM by Andrew Connell

Inspecting the SharePoint OAuth Token with a Fiddler Extension by Andrew Connell

System Center Core

Technical Documentation Download for System Center 2012 #sysctr #Cloud #SCVMM by James van den Berg

System Center Configuration Manager

MP Control Manager not responding by Damian Flynn

System Center Service Manager

Service Manager 2012 R2: Incident Management by Damian Flynn

System Center Virtual Machine Manager

TECHDAYS 2013 – FABRIC MANAGEMENT WITH VIRTUAL MACHINE MANAGER SESSION ONLINE by Thomas Maurer

Migrating to System Center Virtual Machine Manager 2012 R2 by Damian Flynn

Windows Server

In the Spotlight => Hybrid #Cloud with NVGRE (WS SC 2012 R2) #SCVMM #Hyperv by James van den Berg

Tools

MAP Toolkit 9.0 Beta is now available by Robert Smit

DELL Server DRAC Card Soft Reset With Racadm by Didier van Hoye


Setting iDRAC OS Information with IPMI on Ubuntu Server



A few weeks ago the Dell Linux Engineering team published a TechCenter article on how to set and retrieve Linux operating system information in iDRAC using the latest ipmitool utility in Fedora 18 and later releases.

Kent Baxley, Canonical Field Engineer, has ported this functionality to Ubuntu Server 12.04 LTS (and later) so that customers running Ubuntu Server can take advantage of this useful systems management feature.
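As a brief sketch of what this looks like on Ubuntu (assuming an ipmitool build that includes the mc setsysinfo/getsysinfo commands, per the instructions linked below):

sudo ipmitool mc setsysinfo os_name "$(lsb_release -ds)"   # push the OS name to the iDRAC
sudo ipmitool mc setsysinfo system_name "$(hostname)"      # push the host name
sudo ipmitool mc getsysinfo os_name                        # read the value back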

To read the easy-to-follow set of instructions on Ubuntu Server, click here.

Hadoop Performance Optimization: Join our Chat on Tuesday, Dec 3 and Webinar on Wednesday, Dec 4


Please join our Chat and Webinar on:

Hadoop Performance, Tuning, and Optimization
Intel Apache Hadoop Running VMware Big Data Extensions

with Armando Acosta, Big Data Product Manager at Dell

Chat: Tuesday, Dec 3 at 3.00 pm - 4.00 pm CST (UTC-6 hours)
Our chat will be an informal engineer to engineer Q&A session.
Click here to join: https://delltechcenter.adobeconnect.com/chat

Webinar: Wednesday, Dec 4 at 3.00 pm - 4.00 pm CST (UTC-6 hours)
Our webinar will be a more formal (structured) event with a presentation and a Q&A session at the end.
Click here to join: https://delltechcenter.adobeconnect.com/chat

Agenda

Both sessions will cover the same topics, but they differ in style and nature. Think of our chat as a live jam session, whereas the webinar is more like a classic concert at the opera:

  • Dell Involvement in Hadoop
  • Hadoop Benchmarking – Dell PowerEdge R720/R720XD Performance Optimization and Tuning
  • Intel Apache Hadoop running VMware Big Data Extensions on PowerEdge R720XD

If you want to learn more, please go to our white paper landing page.

Implementing Compellent and EqualLogic Storage as Pooled Resources in the Hyper-V based Dell Active Infrastructures


As an Infrastructure as a Service (IaaS) solution, Dell Active Infrastructure is a family of converged infrastructure solutions that combine servers, storage, networking, and infrastructure management into an integrated and optimized system providing general purpose virtualized resource pools. Active Infrastructure helps organizations quickly develop and implement a pre-integrated private cloud infrastructure while reducing complexity and risk. The Dell Active Infrastructure family is offered in configurations with either VMware® vSphere™ or Microsoft® Windows Server® 2012 with the Hyper-V® role enabled. The Hyper-V based private cloud offerings in the Dell Active Infrastructure family are listed below:

To implement a private cloud, resource optimization is one of the key design principles; it drives efficiency and cost reduction and is achieved primarily through resource pooling. Resource pooling abstracts the platform from the physical infrastructure and enables the optimization of resources through shared use. Multiple consumers sharing resources results in higher resource utilization and more efficient and effective use of the infrastructure. This optimization can ultimately help drive down costs and improve agility.

In the Dell Active Infrastructure family, different Dell storage solutions can be used, depending on the organization's needs. Active System 1000 is powered by the Dell Compellent SC8000 Storage Center for Fibre Channel and iSCSI (with an additional SC8100 Storage Center for Fibre Channel). Active System 200/800 is powered by Dell EqualLogic PS6100/PS6110/PS6510 iSCSI storage. In the Hyper-V based private cloud solutions, Microsoft System Center Virtual Machine Manager (VMM) is used to discover, classify, and provision the supported Compellent and EqualLogic storage arrays. VMM automates storage assignment to a Hyper-V host or host cluster and tracks the storage.

To activate this storage feature, VMM uses the Windows Storage Management API (SMAPI) to manage external storage, either through a Storage Management Provider (SMP) or by using SMAPI together with the Microsoft standards-based storage management service to communicate with Storage Management Initiative Specification (SMI-S) compliant storage. To present the storage subsystem as pooled resources in a private cloud, the Compellent SC8000 uses an SMI-S provider, while the EqualLogic PS storage arrays use an SMP.

In Active System 1000, the Compellent SMI-S Provider version 1.5 is bundled with the Enterprise Manager Data Collector version 6.3. SMI-S can be configured during the initial Data Collector installation or post-installation by modifying the Data Collector Manager properties. For more information about the Dell Compellent SMI-S 1.5 provider, please refer to the Dell Compellent Enterprise Manager 6.3 User Guide found on the Dell Storage Knowledge Center.
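As a hedged sketch of how such a provider might be registered with VMM from PowerShell (the provider URL, port, and Run As account name below are placeholders; consult the Enterprise Manager guide for the values in your environment):

# Register the Compellent SMI-S provider endpoint with VMM
$account = Get-SCRunAsAccount -Name "CompellentSmisAccount"
Add-SCStorageProvider -Name "Compellent-EM" -RunAsAccount $account `
    -NetworkDeviceName "https://em-datacollector.example.com" -TCPPort 5989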

Figure 1 Compellent Integration with VMM in Active System 1000

In Active System 200/800, Dell EqualLogic HIT v4.6 is implemented to integrate storage management of the EqualLogic PS storage arrays into VMM via SMP. With this integration, you can manage Dell EqualLogic storage directly through native Windows storage interfaces such as the storage PowerShell cmdlets (Storage module), the File Services UI in the Windows Server 2012 Server Manager console, or standard Windows Management Instrumentation (WMI). Figure 2 below depicts the overview of the EqualLogic storage management integration in VMM. For more information on HIT/Microsoft 4.6, refer to the EqualLogic Software page.

Figure 2 EqualLogic Integration with VMM in Active System 200/800

For more details on the Dell Active Infrastructure offerings and a comparison, visit the Dell Converged Infrastructure web site.

Microsoft Most Valuable Professional (MVP) – Best Posts of the Week around Windows Server, Exchange, SystemCenter and more – #57


"The Elway Gazette"

Hi Community, here is my compilation of the most interesting technical blog posts written by members of the Microsoft MVP Community. The number of MVPs is growing well, and I hope you enjoy their posts. To all MVPs: if you'd like me to add your blog posts to my weekly compilation, please send me an email (flo@datacenter-flo.de) or reach out to me via Twitter (@FloKlaffenbach). Thanks!

Featured Posts of the Week!

Live Migration Can Benefit From Jumbo Frames by Didier van Hoye

Upgrade to Exchange Server 2013 RTM CU3 by Jaap Wesselius

Troubleshooting federated sharing – part III by Johan Veldhuis

Options for Upgrading Standalone Hosts to Windows Server 2012 R2 Hyper-V by Aidan Finn

Azure 

[MSCloudShow] Episode 9 is live - Managing Azure Virtual Machines and Listener Questions by Andrew Connell

Exchange

Die Outlook Web App (OWA) für iPhone und iPad in German by Kerstin Rachfahl

MVP ShowCast 2013 – Semana de Palestras sobre o Exchange Server 2013 in Portuguese by Marcelo Vighi

Upgrade to Exchange Server 2013 RTM CU3 by Jaap Wesselius

Hyper-V

Live Migration Can Benefit From Jumbo Frames by Didier van Hoye

Hyper-V Replica Essentials Book Review | Packt Publishing by Robert Smit

Options for Upgrading Standalone Hosts to Windows Server 2012 R2 Hyper-V by Aidan Finn

Checkpoint Support on Active Directory Virtual Machine by Lai Yoong Seng

Automatic Virtual Machine Activation Key (AVMA) by Lai Yoong Seng

Lync Server

Polycom CX600 fail to login after Certificate renewal on Lync 2013 Edge Server by Jaap Wesselius

Office 365

SharePoint Apps mit Office 365 und NAPA entwickeln in German by Toni Pohl

PowerShell

Remove white lines in file through PowerShell by Jeff Wouters

Managing VMware with CIM and PowerShell by Jeffery Hicks

Identifying Active Directory built-in groups with PowerShell by 

#PSTip Convert a path to WMI compatible path by Ravikanth Chaganti

Modifying a VM with PowerCLI by Jeffery Hicks

#PSTip Check if a folder is shared by Ravikanth Chaganti

PowerShell Tools for Visual Studio Thanksgiving Special! by Adam Driscoll

Sharepoint 

Tipp: Zusätzlicher Speicherplatz für SharePoint Online im SMB Plan in German by Kerstin Rachfahl

System Center Dataprotection Manager

DPM 2012 fails to backup SQL 2012 database by Susantha Silva

System Center Virtual Machine Manager

Upgrading to System Center Virtual Machine Manager 2012 R2 by Damian Flynn

QNAP SMI-S Provider für SC2012 SP1/R2 Virtual Machine Manager in German by Daniel Neumann

System Center Virtual Machine Manager 2012 R2: Migrating Hosts and Libraries by Damian Flynn

Windows Client

IE11 for Windows 7 Globally Available – attempt to access invalid address by Robert Smit

Windows Server

SenseConnector: Remote-Manager für Windows-Admins in German by Nils Kaczenski

8.000 Seiten: TechNet-Doku zu Windows Server 2012 und R2 als PDF in German by Nils Kaczenski

Whitepaper – Hybrid Cloud with NVGRE (WSSC 2012 R2) aktualisiert in German by Daniel Neumann

Redirected I/O in Windows Server 2012/R2 Cluster Shared Volumes by Aidan Finn

Updated Whitepaper for Hybrid Cloud with NVGRE (WSSC 2012) by Kristian Nese

Troubleshooting federated sharing – part III by Johan Veldhuis

Tools

IE11: Noch ein Kompatibilitäts-Trick in German by Nils Kaczenski

Upgrading The DELL MD3600F Controller Firmware Using the Modular Disk Storage Manager by Didier van Hoye

SuSE Linux Enterprise Server 11 SP3 is available for Factory Install now!


This blog was written by Kannan K from the Hypervisor Engineering team and Venkat K from the Linux Engineering team.

Dell has announced the availability of SLES 11 SP3 for factory install today.

What’s new?

Until now, Dell customers have had the option to select SLES 11 SP2 and prior releases as factory installed in BIOS mode only.

With this option, the OS cannot be installed on a virtual disk that is 2 TB or larger.

SLES 11 SP3 is the first release in the SLES 11 branch that can be ordered as factory installed in both BIOS and UEFI boot modes.

Dell customers can order Dell PowerEdge Servers with SLES 11 SP3 OS pre-installed from factory.

Benefits:

A factory installed OS has a local repository configured so that packages can be installed quickly. The example below shows how to use the repository.
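For instance (a minimal sketch; the repository name and package vary by image), you can list the configured repositories and install a package directly from the local one:

zypper lr        # list repositories; the factory image includes a local repo
zypper in gcc    # install a package without needing network access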


How to buy a Dell PowerEdge Server factory installed with SLES 11 SP3 OS in UEFI boot mode?

  • If you order a system via Dell online, the UEFI boot mode option is not available. In that case, factory installation slices the virtual disks into 1.8 TB volumes in BIOS boot mode.

Contact the Dell Sales Team to order a factory installed SLES 11 SP3 system with UEFI boot mode and a virtual disk size of 2 TB or larger.

Please refer to the wiki below for more information about Dell support for UEFI on PowerEdge servers.

http://en.community.dell.com/techcenter/virtualization/w/wiki/4866.vmware-esxi-support-for-uefi.aspx

Other references:

http://en.community.dell.com/techcenter/b/techcenter/archive/2013/07/16/suse-enterprise-linux-11-service-pack3-support-on-dell-poweredge-servers.aspx

Dell PowerEdge VRTX Chassis PCIe Management using Windows PowerShell


This blog post was originally written by Syama Poluri and Aditi Satam. Send your suggestions or comments to WinServerBlogs@dell.com.

The Dell VRTX chassis is a converged infrastructure solution that includes separate compute nodes, network infrastructure, and a shared storage subsystem. This blog post illustrates systems management tasks related to the PCIe slots and adapters in the VRTX chassis.

Dell Chassis Management Controller (CMC) helps manage, monitor, configure and deploy different hardware components in Dell PowerEdge VRTX Solution.

The VRTX chassis provides fully expandable I/O capability with five SFF (Small Form Factor) PCIe slots (Gen 2) of 25 W each and three FHFL (Full Height Full Length) PCIe slots (Gen 2) of 150 W each. The FHFL slots can be used with double-wide PCIe cards that require up to 225 W. These slots can be assigned to the servers in the chassis. A server must be turned off before mapping or unmapping a PCIe device to it. A server may have multiple mapped PCIe adapters; however, each individual PCIe adapter may only be mapped to one server. A maximum of 2 PCIe adapters can be assigned per server if the VRTX CMC has an Express license, and 4 if it has an Enterprise license. These operations can be done using the CMC UI or out-of-band from the operating system using PowerShell and WSMAN profiles. The PowerShell examples given in this blog use the Dell Chassis PCI Management WSMAN profile. The profile description and an explanation of the different classes and properties are located at http://en.community.dell.com/techcenter/extras/m/white_papers/20399998.aspx

The following PowerShell script creates a CIM session to the CMC of the VRTX chassis and demonstrates enumeration of basic PCI adapter and PCI slot properties. It also demonstrates how you can use PowerShell to manage the assignment of PCI slots to a blade. Please see the attached script VRTX_PCIeSlotMgmt.ps1.

The first step is to set up a CIM session to the CMC.

#Enter the CMC IP Address

$ipaddress=Read-Host -Prompt "Enter the CMC IP Address"

#Enter the Username and password

$username=Read-Host -Prompt "Enter the Username"

$password=Read-Host -Prompt "Enter the Password" -AsSecureString

$credentials = new-object -typename System.Management.Automation.PSCredential -argumentlist $username,$password

# Setting the CIM Session Options

$cimop=New-CimSessionOption -SkipCACheck -SkipCNCheck -Encoding Utf8 -UseSsl

$session=New-CimSession -Authentication Basic -Credential $credentials -ComputerName $ipaddress -Port 443 -SessionOption $cimop -OperationTimeoutSec 10000000

echo $session

  

 

You can enumerate the PCI adapters/devices and their properties using the DCIM_ChassisPCIDeviceView class.

Get-CimInstance -CimSession $session -ResourceUri "http://schemas.dell.com/wbem/wscim/1/cim-schema/2/DCIM_ChassisPCIDeviceView" -Namespace root/dell/cmc 

 

You can enumerate all the PCIe slots and their properties using the DCIM_ChassisPCISlot class. It gives the Fully Qualified Device Descriptor (FQDD) of the assigned blade and the slot number. For more information, please refer to Section 7.2.3 of the Dell Chassis PCI Management Profile.

Note: Slots 9 and 10 are reserved for the SPERC8 RAID controller.

Get-CimInstance -CimSession $session -ResourceUri "http://schemas.dell.com/wbem/wscim/1/cim-schema/2/DCIM_ChassisPCISlot" -Namespace root/dell/cmc 

 

The following command illustrates how you can query DCIM_ChassisPCISlot to get slot information where the assigned blade is 'Blade 4'.

Get-CimInstance -CimSession $session -ResourceUri "http://schemas.dell.com/wbem/wscim/1/cim-schema/2/DCIM_ChassisPCISlot" -Namespace root/dell/cmc -QueryDialect "http://schemas.microsoft.com/wbem/wsman/1/WQL" -Query "select * from DCIM_ChassisPCISlot WHERE AssignedBladeFQDD='System.Modular.04'"


 

The AssignPCISlotstoBlades() method is used to assign the modular chassis PCIe slots to blade servers. The PCIe slots referenced by the SlotFQDDs parameter will be assigned to the blades in the BladeSlotFQDDs parameter. In this example, Slot 1 is assigned to Blade 4, and Slots 2 and 3 are assigned to Blade 2.

##Assign PCIe slots to blades

$PCIServiceobj=Get-CimInstance -CimSession $session -ResourceUri "http://schemas.dell.com/wbem/wscim/1/cim-schema/2/DCIM_ChassisPCIService" -Namespace root/dell/cmc -Filter 'CreationClassName="DCIM_ChassisPCIService" ; SystemCreationClassName="Dell_ChassisMgr" ; SystemName="srv:system" ; Name="DCIM:ChassisPCIService"'

$slotfqdd=@("PCIE.ChassisSlot.1"; "PCIE.ChassisSlot.2";"PCIE.ChassisSlot.3")

$bladeslotfqdd=@("System.Modular.04";"System.Modular.02";"System.Modular.02")

$assignmethodobj=Invoke-CimMethod -ResourceUri "http://schemas.dell.com/wbem/wscim/1/cim-schema/2/DCIM_ChassisPCIService" -InputObject $PCIServiceobj -CimSession $session -MethodName AssignPCISlotstoBlades -Arguments @{SlotFQDDs=$slotfqdd;BladeSlotFQDDs=$bladeslotfqdd}

$assignmethodobj

 

The UnassignPCISlotsfromBlades() method is used to unassign the modular chassis PCIe slots from blade servers. The PCIe slots referenced by the SlotFQDDs parameter will be unassigned from the blades in the BladeSlotFQDDs parameter.

## Unassign PCIe slots from blades

$PCIServiceobj=Get-CimInstance -CimSession $session -ResourceUri "http://schemas.dell.com/wbem/wscim/1/cim-schema/2/DCIM_ChassisPCIService" -Namespace root/dell/cmc -Filter 'CreationClassName="DCIM_ChassisPCIService" ; SystemCreationClassName="Dell_ChassisMgr" ; SystemName="srv:system" ; Name="DCIM:ChassisPCIService"'

$slotfqdd=@("PCIE.ChassisSlot.1"; "PCIE.ChassisSlot.2";"PCIE.ChassisSlot.3")

$bladeslotfqdd=@("System.Modular.04";"System.Modular.02";"System.Modular.02")

$unassignmethodobj=Invoke-CimMethod -ResourceUri "http://schemas.dell.com/wbem/wscim/1/cim-schema/2/DCIM_ChassisPCIService" -InputObject $PCIServiceobj -CimSession $session -MethodName UnassignPCISlotsfromBlades -Arguments @{SlotFQDDs=$slotfqdd;BladeSlotFQDDs=$bladeslotfqdd}

$unassignmethodobj

 

 

 Additional Resources:

Managing Dell PowerEdge Servers with Windows PowerShell and Dell iDRAC

For more related articles visit Managing Dell PowerEdge VRTX using Windows PowerShell

How to Extend C Drive (OS partition) in Dell Factory Installed Microsoft Windows Server OS?


When you order a server from Dell with a Microsoft Windows Server OS factory installed, the order page gives you the option to select from three partitioning options: 40GB, 80GB, or Max (which creates an OS partition spanning the full disk). If you don't choose any of these options, you will receive a server with a 40GB OS partition, which is the default size for a factory installed system.

Some customers may need to increase this default size as their business needs change. In this blog I discuss the different options you can exercise to extend your OS partition.

Fig: 1

Note: Throughout the discussion below, the OS partition (or OS volume) is C:\ and the Data partition (or Data volume) is D:\ on Disk 0, as shown in figure 1.

Limitation 1: You cannot extend the OS partition if there is no contiguous unallocated space to the right of it. In that case, the "Extend Volume" option will be greyed out in the Disk Management console.

Limitation 2: You cannot extend the OS partition by shrinking the Data partition behind it, because the "Shrink Volume" feature only generates unallocated space to the right of the partition you shrink.

Fig: 2

Option 1: When you have not started using the Data partition adjacent to the OS partition

  1. From a Command Prompt, type diskmgmt.msc to open the Disk Management console.
  2. Delete the Data partition to get unallocated space on Disk 0, similar to figure 2.
    1. Right-click the D drive and select "Delete Volume". If the Data partition is part of an extended partition, right-click the partition and select "Delete Partition".
  3. Now right-click the OS partition; the "Extend Volume" option is enabled.
  4. Click Extend Volume to increase the OS partition to the size you want.
  5. Create a Data partition using the remaining unallocated space.

Option 2: When you have already started using the Data partition adjacent to the OS partition

  1. Back up your Data partition to a removable disk or to a partition on a disk other than Disk 0. You can simply copy the files to another location, or use the Windows Server Backup tool or a third party backup tool.
  2. From a Command Prompt, type diskmgmt.msc to open the Disk Management console.
  3. Delete the Data partition to get unallocated space on Disk 0, similar to figure 2.
    1. Right-click the D drive and select "Delete Volume". If the Data partition is part of an extended partition, right-click the partition and select "Delete Partition".
  4. Now right-click the OS partition; the "Extend Volume" option is enabled.
  5. Click Extend Volume to increase the OS partition to the size you want (a PowerShell equivalent is sketched after this list).
  6. Create a Data partition using the remaining unallocated space and copy back all the content.
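For those comfortable with the command line, the extend step can also be scripted. A minimal PowerShell sketch using the Storage module cmdlets available in Windows Server 2012 and later, assuming the unallocated space already sits immediately to the right of the C: volume:

# Determine the largest size C: can grow to, then extend it to that size
$size = (Get-PartitionSupportedSize -DriveLetter C).SizeMax
Resize-Partition -DriveLetter C -Size $size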

Option 3: Use a third party disk partitioning tool

There are various disk partitioning tools available on the market to resize partitions. You need to select a tool carefully, based on your requirements. Here are a couple of points to consider when choosing one:

  1. Does the tool support MBR boot disks, UEFI boot disks, or both?
  2. Factory installed systems already come with three primary partitions in the case of MBR boot (legacy BIOS) and four primary partitions in the case of GPT boot (UEFI BIOS). Disk partitioning tools may therefore not work properly on an MBR boot disk, because they cannot resize a Data partition that is part of the extended partition. In that case, consider Option 1 or 2 instead.
  3. Even though third party tools can resize a partition without requiring a backup, it is always recommended to take a backup to avoid any data loss.

Hadoop Performance Optimization on Dell PowerEdge Servers


Dell PowerEdge customers wanting detailed Hadoop tuning information, look no further! The Tuning Hadoop on Dell PowerEdge Servers whitepaper has just been released and contains a wealth of information to help users optimize their Hadoop workloads. Included is a guide to optimization, and the performance effects of various hardware, firmware, and Hadoop configuration options are discussed and charted in great detail.

The four primary areas covered in detail are:

  • Hardware Selection
  • BIOS and Firmware options
  • OS and Filesystem options
  • Hadoop Configuration options

Key findings in this whitepaper are the following:

  • Compression increases performance by up to 20 percent under certain Hadoop activities (a sample configuration follows this list).
  • Block size tuning can increase Hadoop MapReduce performance by up to 21 percent.
  • The number of hard disk drives (HDDs) attached to the data nodes can decrease Hadoop job execution time by up to 20 percent for each HDD added, until storage-bus saturation is reached.
  • Changing certain file system settings in the native OS increased Hadoop performance by up to 10 percent.
  • Changing certain default OS settings can increase Hadoop performance by up to 15 percent and increase the chance of Hadoop workloads completing.
  • Adding certain Java options to the Hadoop JVMs can increase Hadoop performance by up to five percent.
  • Changing certain Hadoop parameters can increase performance by up to 12 percent in MapReduce jobs.
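As a hedged illustration of the compression and block size findings above, the snippet below shows the kind of properties involved (Hadoop 1.x property names; the values are assumptions for illustration, not the whitepaper's recommendations):

<!-- mapred-site.xml: compress intermediate map output -->
<property>
  <name>mapred.compress.map.output</name>
  <value>true</value>
</property>
<property>
  <name>mapred.map.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>

<!-- hdfs-site.xml: raise the HDFS block size from the 64 MB default to 128 MB -->
<property>
  <name>dfs.block.size</name>
  <value>134217728</value>
</property>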

For much more detailed information, please consult the whitepaper.

Save up to 88% in storage requirements and address a broad set of VDI use cases with Dell DVS Enterprise for Windows Server 2012R2


By Senthil Baladhandayutham, Solutions Development Manager, Desktop Virtualization Solutions Group, Dell.

The introduction of Microsoft Windows Server 2012R2 has brought several important improvements to Remote Desktop Services (RDS) and VDI solutions leveraging Microsoft Hyper-V. Notable additions include active data deduplication on running VMs, RDP clients for mobile endpoints, improved RemoteFX graphics, and support for Windows 8.1 VDI VMs. In this blog, we walk you through how the Windows Server 2012R2 feature set is integrated into the Dell Desktop Virtualization Solution (DVS) Enterprise for Windows Server 2012R2.

Dell DVS Enterprise for Windows Server 2012R2 is designed with simplicity and affordability in mind to meet the broadest set of VDI use cases with an emphasis on total cost of ownership. Whether your IT department is planning a pilot, a small office deployment, or a large-scale datacenter VDI implementation, we suggest three deployment options, or PODs, running on Microsoft Windows Server 2012R2 to meet your needs:

  • DVS 10-seat trial kit: tower based option, ideal for POC and small scale deployments.
  • Dell PowerEdge VRTX POD: ideal for small office/branch office scenarios.
  • Scalable Datacenter POD: ideal for large scale Enterprise deployment requiring a wide variety of user types.

All the PODs were validated with the latest Intel Ivy Bridge processors, running Microsoft Windows Server 2012R2 session host VMs and Windows 8.1 VDI VMs.

10-Seat Trial-Kit:

The DVS 10-seat trial kit is ideal for POC and small office deployments of up to 10 users. The trial kit comes with a PowerEdge T110ii tower server bundled with Dell thin clients to support a quick and easy deployment.

PowerEdge VRTX POD:


The PowerEdge VRTX POD comes in 2-blade or 4-blade configurations. Nodes are clustered using Microsoft Failover Clustering, and the storage is shared among the nodes using Microsoft CSV (Cluster Shared Volumes) and Dell Shared PERC (PowerEdge RAID Controller) technology. Pooled VDI and Remote Desktop Session Host (RDSH) deployments leverage the CSV caching feature to reduce the boot storm and login storm IOPS experienced in a typical VDI deployment. The PowerEdge VRTX POD is best suited for small departmental deployments or remote/branch office deployments where a sophisticated IT skillset is not required to manage the infrastructure.

Dell Datacenter POD:

The Dell Datacenter POD (or DVS Enterprise) comprises best-of-breed Dell datacenter components (Dell PowerEdge R720 rack servers, Dell EqualLogic 61xx series storage arrays, and Dell Force10 S55 networking) bundled with Windows Server 2012R2, providing the ideal platform to solve the broadest set of VDI deployment scenarios (RDSH, pooled, persistent, graphics, and unified communication (UC)) at an affordable price point.

Let us provide an example of how much you can save on storage requirements with personal desktops. Personal desktops with persistent profiles are typically the most expensive VDI option to deploy, because each user VM must be unique and requires its own disk space. The Datacenter POD was optimized to deliver personal desktops at an affordable price by hosting them on a Dell PowerEdge R720 server running the Windows Server 2012R2 File Server role with its built-in data deduplication feature, with a Dell EqualLogic hybrid 6100XS array (SSD low-latency performance plus high capacity spinning HDDs) serving as the backend storage. By utilizing native Windows data deduplication in our in-house testing, we observed a deduplication rate as high as 88%, translating directly into savings on storage requirements. This capacity savings, combined with the EqualLogic 6100XS's ability to handle the higher IOPS, helps deliver personal desktops at an affordable price.
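As a hedged sketch of the deduplication side of this design (the volume letter is a placeholder; the HyperV usage type requires Windows Server 2012R2):

# Enable dedup with the VDI-optimized profile on the volume holding the VHDX files
Enable-DedupVolume -Volume "E:" -UsageType HyperV

# Kick off an optimization pass, then check the space savings
Start-DedupJob -Volume "E:" -Type Optimization
Get-DedupVolume -Volume "E:"    # reports SavedSpace and SavingsRate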

Dell Datacenter POD


In addition to the storage enhancements, the Datacenter POD ships with Dell PowerEdge R720 servers powered by the latest generation of Intel Ivy Bridge processors and NVIDIA (Grid K1/K2) or AMD (FirePro S7000/S9000) graphics cards, making it a highly flexible server platform for deploying remote desktop sessions, RemoteFX shared graphics VMs, and pooled or persistent desktop VMs.

Visit the DVS Enterprise for Windows Server 2012 resources page to learn more about the solution.

Come and meet Dell and Microsoft VDI experts at Dell World in Austin, 11th-13th December 2013.

If you can't attend, visit www.dell.com/virtualdesktop or contact us to learn more.

Dell PowerEdge Servers and Windows Server 2012R2 - Hyper-V Storage Quality of Service


This blog post was originally written by Michael Schroeder, Thomas Cantwell and Aditi Satam.

Microsoft has added a number of new features and capabilities to Windows Server 2012R2, and updated many existing ones as well.

Dell PowerEdge servers, such as the Dell PowerEdge R620 and R720, support Windows Hyper-V, and Dell also offers the Hyper-V role installed (if selected during purchase) when a server has a factory-installed OS (http://en.community.dell.com/techcenter/b/techcenter/archive/2013/11/21/the-top-5-reasons-to-buy-a-dell-factory-installed-oem-operating-system-when-you-purchase-a-new-dell-server.aspx ).

Dell PowerEdge servers can take full advantage of these new capabilities. Storage QoS was developed to prevent any single VM or set of VMs from consuming all the storage bandwidth available to the VHDs in a server.

As you can see in the screenshot, you can select the Advanced Features option under a VM's hard drive settings to configure your storage QoS policy. Storage QoS is implemented on a per-virtual hard disk basis.

                   

 

Key benefits of Hyper-V Storage QoS:

  • From a hosting perspective, QoS can be used to define storage SLAs for different levels of performance, for example gold, silver, and bronze configurations for different classes of VM tenants.
  • Storage QoS can prevent disruptive VMs from monopolizing the available storage bandwidth, keeping bandwidth available to the neighboring good citizen VMs.
  • It allows you to send alerts when the minimum IOPS for a virtual disk have not been met.

Control of:

  • Minimum IOPS per virtual hard disk
  • Maximum IOPS per virtual hard disk

Ability to manage via:

  • Hyper-V Manager – see Advanced Features for each VHD when setting up the VM.
  • PowerShell (see the example after this list)
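A minimal PowerShell sketch (the VM name and controller location are placeholders; IOPS values are expressed in normalized 8 KB units):

# Reserve a floor of 100 IOPS and cap the virtual hard disk at 500 IOPS
Set-VMHardDiskDrive -VMName "GoldVM01" -ControllerType SCSI `
    -ControllerNumber 0 -ControllerLocation 0 `
    -MinimumIOPS 100 -MaximumIOPS 500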

 Limitations:

  • Works at the VHDX layer.
  • Storage QoS is not available if you are using shared virtual hard disks.
  • Storage QoS cannot be enabled in conjunction with differencing VHDs when the parent and child disks are on separate volumes.
  • Important! Any virtual hard disk that does not have a minimum IOPS value defined defaults to 0, which means that VHD could end up resource-starved in some extreme cases.

 

Microsoft® Clustered Storage Spaces on Dell PowerEdge VRTX™ Support and Cluster Validation Warnings


This blog post was originally written by Paul Marquardt

Currently, the Dell PowerEdge VRTX™ platform does not support Microsoft Storage Spaces. As discussed here, Storage Spaces is not supported with storage behind a RAID controller, and shared storage on a VRTX chassis is accessed via the SPERC controller.

This configuration can cause confusion when installing and configuring Microsoft failover clustering. During cluster validation, you may receive a yellow warning stating that the system storage does not support the SCSI-3 Persistent Reservations commands needed by clustered storage pools that use the Storage Spaces subsystem. This is only a warning and does not inhibit the capability of the VRTX platform to support Microsoft failover clustering.

Warning Message in Cluster Validation Report:

Disk X does not support SCSI-3 Persistent Reservations commands needed by clustered storage pools that use the Storage Spaces subsystem. Some storage devices require specific firmware versions or settings to function properly with failover clusters. Contact your storage administrator or storage vendor for help with configuring the storage to function properly with failover clusters that use Storage Spaces.

A yellow warning during the cluster validation process does not indicate that the cluster will fail to build; a warning simply provides information on the system's support for different clustered features. Items shown in red in the cluster validation report are the issues that cause the failover cluster to fail the validation and build process.
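For reference, the same validation report can be generated from PowerShell (a minimal sketch; the node names are placeholders):

# Run cluster validation against the VRTX blades; yellow warnings do not block cluster creation
Test-Cluster -Node "vrtx-blade1","vrtx-blade2"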

We Need Your Feedback Regarding A Brand New Dell Online Support Experience


We are excited to announce the release of a brand new Dell Online Support Experience. Dell's customers in all English speaking countries are the first to enjoy it. We designed this new experience with our customers in mind, and the new site is a direct result of what our customers have been asking for: a cleaner, simpler, and more intuitive Dell Support site where all of their online support needs are within reach. This new landscape delivers solutions to customers based on how they think, and, over time, the current Dell Support site will evolve to this new landscape.

 

Please join us tomorrow, Tuesday 12/10 at 2 PM, for the DellTechCenter.com TechTuesday chat to learn more about Dell's new Online Support Experience. Enter the chat room here.

 

What do you think of the new Dell Online Support Experience? Specifically:

  • Did you enter a Service Tag/ESC to get customized Support?
    • If not, why not?
  • Were you able to locate the necessary Support solutions for your product?
    • If not, why not?
  • How did you attempt to locate Support solutions for your product?
    • Entering a Search term
    • Accessing the “Research a Topic” section
    • Entering a Service Tag/ESC or making a selection from the Product Selector
    • Logging into My account

We want to hear from you, so we can further refine the Dell Online Support Experience as we offer the experience to the rest of the world.

 

The site's goal is to simplify and improve our customers' online support experience with relevant information and tools for self-service. The site offers our customers a guided path through diagnostics, troubleshooting, and resolution, all online. It also offers a prominent place to log into accounts and an inline Product Selector that provides our customers with a quicker and easier route to Dell's product catalog. Other features include:

  • Support home page
    • Updated Service Tag entry box
    • Prominent account login
  • Support dashboard
    • Product health information
    • Product details
    • Inline manuals
    • Inline config
  • Get started/Maintain Schedule
    • Get started content
    • Support history
    • Online diagnostics history
    • Static maintenance scheduled
  • Research a topic
    • Trending issues and top solutions prioritized
    • Support videos
    • Troubleshooting tips
  • Get drivers
    • Access to the most up-to-date drivers
    • Upgraded categories
  • Upgrade
    • Easy-to-use categories provide a direct path to parts and upgrades

