Channel: Dell TechCenter

Driver Pack Catalog now available!


Authored by Sujeet Kumar

 

In a heterogeneous enterprise environment with various models and generations of Dell systems (laptops, desktops, tablets, and so on), manually downloading the latest Driver Packs over the Internet can prove challenging.

To address this, Dell introduces the beta release of the Driver Pack Catalog, which consolidates metadata about all the latest Driver Packs in one location, enabling customers to "do more".

 

The Driver Pack Catalog is an XML file containing metadata about new Dell systems and WinPE Driver Packs. The XML file is compressed as a Microsoft Cabinet (.CAB) file and digitally signed for delivery to customers over the Internet.

 

The Driver Pack Catalog links to the latest Driver Packs (.CAB files), which can be downloaded automatically from the Internet.

 

The key features and information available in the Driver Pack Catalog are:

 

  • Metadata about all the latest Driver Packs at a single location.
  • System (Win) and WinPE Driver Packs are supported.
  • Driver Pack applicability depends on BIOS or System ID, operating system (Microsoft Windows major and minor versions), and type (WinPE or Win).
  • Metadata such as version, download links, MD5 hash, and .CAB file size is associated with each Driver Pack in the Catalog to allow easy downloads.
  • Dell certifies the Catalog contents for the following:
    ◦ The latest Driver Packs are available in the Catalog.
    ◦ All Driver Packs in the Catalog are available for download.
    ◦ For a given BIOS or System ID and operating system, Driver Packs of both types (WinPE and Win) are present in the Catalog.

Note: Dell controls which systems are supported in the Catalog. At present, Latitude, OptiPlex, Precision, and tablet systems are supported.

 

Dell releases Driver Packs every month per its release schedule. The Driver Pack Catalog is available for download from any of these URLs:

http://downloads.dell.com/catalog/DriverPackCatalog.cab

ftp://downloads.dell.com/catalog/DriverPackCatalog.cab

ftp://ftp.dell.com/catalog/DriverPackCatalog.cab
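Once a Driver Pack has been downloaded, the MD5 hash published in the catalog metadata can be used to verify the file before deployment. A minimal sketch in Python (the catalog's XML schema is not shown here; the expected hash is assumed to come from the Driver Pack's entry in the catalog):

```python
# Verify a downloaded Driver Pack .CAB against the MD5 hash from the
# catalog metadata. Illustrative sketch only; the expected_md5 value
# would be read from the Driver Pack's entry in DriverPackCatalog.xml.
import hashlib

def md5_of(path, chunk_size=8192):
    """Compute the MD5 digest of a file without loading it all into memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_driver_pack(path, expected_md5):
    """Return True if the file's MD5 matches the catalog's published hash."""
    return md5_of(path).lower() == expected_md5.strip().lower()
```

A download that fails this check should be discarded and retried, since a mismatch indicates corruption in transit.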

 

Details about the Driver Pack Catalog and a how-to guide are available on the Driver Pack Catalog wiki page on Dell TechCenter. Please provide your feedback in the Driver Pack Catalog discussion thread.


Dell’s Secure Mobile Access Solution Bolsters Cyber Security


While reading the blog article titled "Has mobile computing had a positive impact on cybersecurity?" in NetworkWorld, I thought Jon Oltsik made some interesting points about mobile security and changes in how companies deal with Bring Your Own Device (BYOD) and the management of data both on the endpoint device and in flight. There are a few points I want to offer in response.

I agree that network administrators need policy enforcement, and that this should include the ability to not only identify the user, but also determine the device type and its current security state to ensure proper access is granted. This provides the ability to limit access not only to certain users, but also to certain devices or device types. While I’m not a huge fan of MDM to solve the BYOD issue, I do believe that introducing a container such as the Dell Enterprise Mobile Management solution can help ensure that data at rest on that specific device is secure. It also ensures that personal data is not intermingled with business data.

To further strengthen Dell’s position in the mobile access market, we introduced the Dell Secure Mobile Access (SMA) OS 11.0 for Dell SonicWALL Secure Remote Access appliances at Interop 2014 in Las Vegas. New features in this release provide additional controls for network administrators not only to identify the user and device, but also to control which apps can be used to access corporate resources, including Dell EMM or third-party apps such as Virtual Desktop (VDI). This new control enables the network administrator to create a list of trusted applications for employees to use for business. Additionally, SMA 11.0 offers powerful Mobile Device Policy authorization to ensure BYOD users legally comply with corporate access and data policy.


While there is no perfect solution to solve all the challenges around allowing access to business data through a personal device, we feel that Dell is in a solid position to further solve the challenges with the introduction of SMA 11.0 along with continued integration of the Dell EMM solution. The combined solution provides the ability not only to secure data in transit, but also to ensure that data is secure on the mobile device.

Dell's end-to-end Connected Security solutions, from the endpoint to the datacenter to the cloud, include Dell Enterprise Mobility Management (EMM), Dell SonicWALL network security, and secure mobile access. Coupled with Dell Data Protection and Encryption (DDPE), these solutions reduce business risk and protect organizations from ever-evolving global threats.

Ubuntu Server 14.04 LTS released by Canonical


The latest version of Ubuntu Server, 14.04 LTS (a.k.a. Trusty Tahr) is now available from Canonical. Released on April 17, 2014, Ubuntu 14.04 LTS includes a rebase to the 3.13 kernel, the latest OpenStack release (“Icehouse”) and an update to the latest Juju cloud deployment tools. See the Release Notes for full details.

Cloud deployment tools

Ubuntu 14.04 LTS includes a set of tools to easily deploy and orchestrate services in the cloud. Canonical’s MAAS (Metal-as-a-Service) deploys Ubuntu 14.04 LTS on bare metal and works together with Juju to deploy services on top of those machines. Juju can also deploy services to popular cloud providers such as Amazon AWS and Microsoft Azure.

The enhanced collaboration efforts between Dell and Canonical over the last year have ensured that Ubuntu LTS releases and Canonical’s cloud deployment tools continue to run smoothly on Dell PowerEdge and PowerEdge-C servers.

Hardware Certification

Dell has worked with Canonical to test and certify PowerEdge & PowerEdge-C servers with the Ubuntu 14.04 LTS release. Customers looking to deploy Ubuntu Server in their environment can choose this release with confidence knowing that LTS releases are supported by Canonical for 5 years.

We are proud to announce that we have certified all 12th Generation and all shipping 11th Generation PowerEdge servers with Ubuntu Server 14.04 LTS. For a complete list of all certified PowerEdge servers, go here.

OpenManage for Ubuntu

The Dell Linux Engineering team has released a build of OpenManage 7.4 for Ubuntu Server. Please note that this build is provided without any warranties or support. Though our team tested it on a number of PowerEdge servers without issues, we could not cover every possible use-case scenario. For installation instructions, refer to this blog post.

Where to get support

Ubuntu support contracts from Canonical are available from your Dell sales representative or directly from Canonical through the Ubuntu Advantage program. In addition, best-effort support from Dell is available with your Dell ProSupport contract.

For questions and general discussion, contact our mailing list Linux-PowerEdge where Dell engineers, support teams and customers discuss Linux on PowerEdge servers. We welcome your participation and feedback.

Dell Open Source Ecosystem Digest #43


Big Data:
Cloudera: Making Apache Spark Easier to Use in Java with Java 8
One of Apache Spark's main goals is to make big data applications easier to write. Spark has always had concise APIs in Scala and Python, but its Java API was verbose due to the lack of function expressions. With the addition of lambda expressions in Java 8, we’ve updated Spark’s API to transparently support these expressions, while staying compatible with old versions of Java. This new support will be available in Spark 1.0. Read more.

Cloudera: Hello, Apache Hadoop 2.4.0
Hadoop 2.4.0 includes myriad improvements to HDFS and MapReduce, including (but not limited to): ACL Support in HDFS — which allows, among other things, easier access to Apache Sentry-managed data by components that use it (already shipping in CDH 5.0.0), native support for rolling upgrades in HDFS (equivalent functionality already shipping inside CDH 4.5.0 and later), usage of protocol-buffers for HDFS FSImage for smooth operational upgrades… See full release notes.

Datameer: How-to: Import Excel Data (Even Existing Formula Results!) into Datameer
For this week’s Favorite Feature Friday, I wanted to show how easy it is to import Excel files into Datameer. Not only does Datameer read its raw data, but it also evaluates any existing formulas in the Excel sheet to pull those results in as well. Read more.

DevOps:
Ravello: Choosing a Central Logging Tool: 5 Important Features, 6 Optional Tools
We need to continually monitor our servers and our production and dev environments. As our environments grow and scale out, it becomes increasingly difficult to debug failures, and crisis analysis requires SSH-ing to multiple servers. We therefore wanted to be able to view all the logs for all our servers from a single entry point. We also wanted to be notified of abnormal activity in our logs, because we can’t sit and watch them all day long. Read more.

OpenStack:
Ben Nemec: My Devtest Workflow
I said in a previous post that I would write something up about my devtest workflow once I had it nailed down a bit more, and although it's an always-evolving thing, I think I've got a pretty good setup at this point so I figured I'd go ahead and write it up. Instead of including scripts and such here, I'm just going to link to my Github repo where all of this stuff is stored and reference those scripts in this post. Read more.

Cloudscaling: The Top Three Things To Bookmark for the Juno Summit
I can’t believe that it’s less than a month until the upcoming Juno Summit. As you start putting together your plans for the Summit, I wanted to highlight some items to look forward to from the Cloudscaling team. Read more.

Cloudwatt: Deploy Horizon From Source With Apache And Ssl
Some companies may deploy OpenStack clouds but without the Horizon Dashboard interface, and therefore you may wish to deploy your own horizon instance, either on a hosted VM of the OpenStack infrastructure, or why not on your own computer? Read more.

ICCLab: Floating IPs management in Openstack
OpenStack is generally well suited to typical use cases, and there is hardly any reason to tinker with the advanced options and features available. Normally you would plan your public IP address usage and management well in advance, but if you are an experimental lab like ours, things are often handled in an ad-hoc manner. Recently, we ran into a unique problem that took us some time to solve. Read more.

Loic Dachary: HOWTO migrate an AMI from Essex to a bootable volume on Havana
A snapshot of an Essex OpenStack instance contains an AMI ext3 file system. It is rsync’ed to a partitioned volume in the Havana cluster. After installing grub from chroot, a new instance can be booted from the volume. Read more.

Opensource.com: Giving rise to the cloud with OpenStack Heat
Setting up an application server in the cloud isn't that hard if you're familiar with the tools and your application's requirements. But what if you needed to do it dozens or hundreds of times, maybe even in one day? Enter Heat, the OpenStack Orchestration project. Heat provides a templating system for rolling out infrastructure within OpenStack to automate the process and attach the right resources to each new instance of your application. To learn more about Heat, say hello to Steve Baker. Steve served as the Program Technical Lead (PTL) for the Heat project during the Icehouse release cycle and is a senior software engineer at Red Hat. Read more.

Opensource.com: How to govern a project on the scale of OpenStack
How an open source project is governed can matter just as much as the features it supports, the speed at which it runs, or the code that underlies it. Some open source projects have what we might call a "benevolent dictator for life." Others are outgrowths of corporate projects that, while open, still have their goals and code led by the company that manages them. And of course, there are thousands of projects out there that are written and managed by a single person or a small group of people for whom governance is less of an issue than ensuring project sustainability. Read more.

Rackspace: Going Hybrid With vSphere And OpenStack
“OpenStack is on the cusp of major adoption.” How many times have you heard a vendor or analyst say that or some variation of it in the past 12 months? The fact is that many companies are still evaluating OpenStack and trying to determine how it should fit into their overall IT strategies. That should not be a surprise given the disruptive nature of a technology like cloud computing. Read more.

Rackspace: Why We Craft OpenStack
As OpenStack Summit Atlanta fast approaches, we wanted to dig deeper into the past, present and future of OpenStack. In this video series, we hear straight from some of OpenStack’s top contributors from Rackspace about how the fast-growing open source project has evolved, what it needs to continue thriving, what it means to them personally, and why they are active contributors. Read more.

Red Hat: What’s New in Icehouse Storage
The latest OpenStack 2014.1 release introduces many important new features across the OpenStack Storage services, including advanced block storage Quality of Service, a new API to support Disaster Recovery between OpenStack deployments, a new advanced Multi-Locations strategy for the OpenStack Image service, and many improvements to authentication, replication and metadata in OpenStack Object storage. Read more.

Red Hat: An Icehouse Sneak Peek – OpenStack Networking (Neutron)
Starting in the Folsom release, OpenStack Networking, then called Quantum, became a core and supported part of the OpenStack platform, and is considered to be one of the most exciting projects – with great innovation around network virtualization and software-defined networking (SDN). The general availability of Icehouse, the ninth release of OpenStack, is just around the corner, so I would like to highlight some of the key features and enhancements made by the contributors in the community to Neutron. Read more.

Red Hat: The Road To High Availability for OpenStack
Many organizations choose OpenStack for its distributed architecture and its ability to deliver an Infrastructure-as-a-Service environment for scale-out applications, whether for private on-premise clouds or public clouds. It is quite common for OpenStack to run mission-critical applications. OpenStack itself is commonly deployed in a Controller/Network-Node/Computes layout, where the controller runs management services such as nova-scheduler, which determines how to dispatch compute resources, and the Keystone service, which handles authentication and authorization for all services. Read more.

Steve Hardy: Heat auth model updates - part 2 "Stack Domain Users"
As promised, here's the second part of my updates on the Heat auth model, following on from part 1 describing our use of Keystone trusts. This post will cover details of the recently implemented instance-users blueprint, which makes use of keystone domains to contain users related to credentials which are deployed inside instances created by heat. Read more.

Created a BYOD Strategy? Now what?


It’s that time of year again.  The “boys of summer” begin another season with high expectations and the goal of appearing in baseball’s post season.  All thirty Major League Baseball teams spend the off-season developing a plan to gain the competitive advantage and improve their position over last year. They discuss and create a strategy to fill voids to increase their chances of making the play-offs.  Unfortunately, due to budgetary limitations, inability to keep and attract free agents, or the lack of a realistic plan, many major league teams will not be able to successfully execute on their strategy.

The same is true of your BYOD strategy. Enabling employees to use their personal devices to access corporate network resources seems like a great way to reduce capital expenses while increasing worker productivity. Seems like a great plan, doesn’t it? But the inability to control a device the company doesn’t own, such as a smartphone or tablet, is one of the challenges facing IT departments. It can, however, be addressed with solutions such as App VPN and container technologies.

But the real obstacle enterprises face today is not technical, but legal. The question is, “how do companies ensure that employees understand and acknowledge the BYOD policy provided and approved by their legal department?”  Implementing consumer rights protection as identified by local jurisdictions is critical to executing a successful BYOD strategy. Today, companies have been forced to improvise by implementing manual systems either through email, submission of a physical form, or the use of a portal to get employees to acknowledge the policy.

The best way to ensure acknowledgement and acceptance of the BYOD policy is to integrate the process into the workflow. The technology in the Dell Secure Mobile Access (SMA) solution recently announced at Interop 2014 integrates a workflow that requires employees to accept the company’s BYOD policy before they get access to the corporate network. In fact, unlike other vendors’ attempts to address the compliance issue, the employee’s personal device is not authorized to access the corporate network until the user accepts the corporate policy. Dell SMA protects the company from legal ramifications, as it provides the means to push down the policy upon initial access as well as after any subsequent changes to that policy. And since BYOD compliance differs by local jurisdiction, Secure Mobile Access enables the administrator to organize policies by groups or communities, thereby ensuring the pertinent agreements are accepted.


Just as in Major League Baseball, the success of a strategy depends on the ability to implement and execute. Sometimes the tools are readily available, and when they are, enterprises need to incorporate them quickly and effectively. Dell’s new Secure Mobile Access solution is the advantage enterprises need to maximize employees’ productivity and truly benefit from the BYOD trend.

Join our webinar on April 22 to learn more about Dell Secure Mobile Access: Click here.


Accessing CMC Hardware and Chassis Logs of Dell PowerEdge VRTX Chassis


This blog post was originally written by Aditi Satam and Syama Poluri.

The Dell VRTX Chassis is a converged infrastructure solution that includes four separate compute nodes, network infrastructure, and a shared storage subsystem. Dell Chassis Management Controller (CMC) helps manage, monitor, configure and deploy different hardware components in the Dell PowerEdge VRTX Solution.

The VRTX platform offers a robust mechanism for monitoring and troubleshooting the chassis. You can retrieve event information from the hardware logs available within the CMC user interface. To access the chassis logs that way, you log in to CMC and save an XML file. Alternatively, you can access these logs directly from the Windows operating system, without saving a separate file from CMC, by using Windows PowerShell and its WSMAN/CIM capability. Step one is to establish a CIM session to the CMC. Once the session is established, you can retrieve the logs, then pipe them and save them as XML or in clear-text format.

The PowerShell commands below walk you through all the steps: establishing a CMC session, retrieving the chassis, hardware, and software logs, and clearing the logs.

#Enter the CMC IP Address

$ipaddress=Read-Host -Prompt "Enter the CMC IP Address"

#Enter the Username and password

$username=Read-Host -Prompt "Enter the Username"

$password=Read-Host -Prompt "Enter the Password" -AsSecureString 

$credentials = new-object -typename System.Management.Automation.PSCredential -argumentlist $username,$password 

# Setting the CIM Session Options

$cimop=New-CimSessionOption -SkipCACheck -SkipCNCheck -Encoding Utf8 -UseSsl

$session=New-CimSession -Authentication Basic -Credential $credentials -ComputerName $ipaddress -Port 443 -SessionOption $cimop -OperationTimeoutSec 10000000

echo $session

Fig 1. CIM Session Established

  

# Retrieve Chassis, Hardware and Software logs 

$chassislogentry=Get-CimInstance -CimSession $session -ResourceUri "http://schemas.dell.com/wbem/wscim/1/cim-schema/2/DCIM_ChassisLogEntry" -Namespace root/dell/cmc -ErrorAction Stop

$chassislogentry 

($chassislogentry|ConvertTo-Xml).Save(".\chassislog.xml")

 

$hwlogEntry=Get-CimInstance -CimSession $session -ResourceUri "http://schemas.dell.com/wbem/wscim/1/cim-schema/2/Dell_HWLogEntry" -Namespace root/dell/cmc -ErrorAction Stop

$hwlogEntry 

 

$swlogEntry=Get-CimInstance -CimSession $session -ResourceUri "http://schemas.dell.com/wbem/wscim/1/cim-schema/2/Dell_SWLogEntry" -Namespace root/dell/cmc -ErrorAction Stop

$swlogEntry

Fig 2. CMC Hardware (CEL) Logs

  

#Clear Chassis and Hardware Log 

$chassisrecordlog=Get-CimInstance -CimSession $session -ResourceUri "http://schemas.dell.com/wbem/wscim/1/cim-schema/2/DCIM_ChassisRecordLog" -Namespace root/dell/cmc -ErrorAction Stop

$chassisrecordlog

Invoke-CimMethod -InputObject $chassisrecordlog -ResourceUri "http://schemas.dell.com/wbem/wscim/1/cim-schema/2/DCIM_ChassisRecordLog" -CimSession $session -MethodName ClearChassisLog  

 

$hwlog = Get-CimInstance -CimSession $session -ResourceUri "http://schemas.dell.com/wbem/wscim/1/cim-schema/2/Dell_HWLog" -Namespace root/dell/cmc -ErrorAction Stop

Invoke-CimMethod -InputObject $hwlog -ResourceUri "http://schemas.dell.com/wbem/wscim/1/cim-schema/2/Dell_HWLog" -CimSession $session -MethodName ClearLog

Additional Resources:

Using Microsoft Windows PowerShell CIM Cmdlets with Dell iDRAC

For more related articles visit Managing Dell PowerEdge VRTX using Windows PowerShell

Dell TechCenter Rockstar Interview #1: Andrea Mauro


In this interview series we will introduce IT Professionals selected for this year’s Dell TechCenter (DTC) Rockstar Program. This program recognizes independent experts and enthusiasts for their significant positive impact on Dell TechCenter, Blogs and Social Media when discussing Dell. You can find a list of all our Rockstars here. Our first interviewee is Andrea Mauro.


Dell TechCenter: Andrea, what is your domain of expertise?

Andrea Mauro: I mainly focus on virtualization and related topics, including storage and networking. I am also a VMUG IT Co-Founder and board member as well as a VMware vExpert 2010, 2011, 2012, 2013. Some of my certifications are: VCP-DV/DT/Cloud, VCAP-DCA/DCD, VCDX, MCSA, MCSE, MCITP, CCA.

Q: What are the most exciting trends in your area?

A: The virtualization community pays a lot of attention to the "Software Defined" aspects. And for sure one of the coolest topics of this year is the Software Defined Storage and all storage aspects related to virtualization.

Q: Can you point us to resources you find particularly valuable?

A: It’s good to start with each vendor. For multi-vendor or vendor-neutral content, it’s a little more complicated to find useful resources, but there are several good blogs. Good collections of virtualization and storage blogs are PlanetV12n and vLaunchPad. There are also several good resources on Dell TechCenter on virtualization and storage. You can often find amazing information in the Wiki, like (just as an example) this great EqualLogic resource.

Q: How do you engage with the IT community?

A: I usually work across multiple threads and approaches: blog posts are good for sharing experience or my thoughts on certain topics, and also for recapping or just capturing information. Communities, discussions, and events are a great opportunity to exchange and share information and to learn more.

Q: What are the most cutting edge Dell products – and why?

A: Dell Fluid Cache was an interesting announcement at the last Dell Enterprise Forum, and it could bring real changes to the storage world. There is already software in the marketplace for implementing a good host-side cache, but only a few products offer real write-back support. With Dell entering the stage, it will become more interesting to see how this area evolves. Of course, Fluid Cache would be the best complement to the Compellent Storage Center, but it will also be interesting to see whether Dell Fluid Cache is adapted for other storage solutions.

Q: Andrea, thank you so much!

Andrea Mauro’s Twitter handle and Blog.

Dell PowerEdge R920 with Express Flash NVMe PCIe SSD Claims a New World Record


Continuing the trend of producing world record performance in the Enterprise Resource Planning (ERP) workspace, Dell has published another top 4-socket score on the SAP SD Two-Tier Benchmark*. With this submission, Dell also introduces our new solid-state storage solution: the Dell PowerEdge Express Flash NVMe PCIe SSD, a high-performance solid-state device. This technology, combined with SUSE Linux Enterprise Server running SAP Sybase Adaptive Server Enterprise 16 and the Intel Xeon Processor E7-4800 v2 product family, reasserts the Dell PowerEdge R920’s leadership in the 4-socket arena. Let’s take a closer look at how Dell solutions stack up against similar competitive platforms in the ERP workspace.

   

SAP SD Two-Tier Benchmark System Description

System | Number of Benchmark Users | Processor Type | Operating System | Database Management System | ERP Release
Dell PowerEdge R920 with NVMe | 25,451 | Intel Xeon E7-4890 v2 | SUSE Linux Enterprise Server 11 | SAP Sybase ASE 16 | SAP Enhancement Package 5 for ERP 6.0
IBM System x3850 X6 | 25,000 | Intel Xeon E7-4890 v2 | Windows Server 2012 Standard Edition | IBM DB2 10 | SAP Enhancement Package 5 for ERP 6.0
HP ProLiant DL580 Gen8 | 24,450 | Intel Xeon E7-4890 v2 | Windows Server 2012 Datacenter Edition | Microsoft SQL Server 2012 | SAP Enhancement Package 5 for ERP 6.0
Dell PowerEdge R920 | 24,150 | Intel Xeon E7-4890 v2 | SUSE Linux Enterprise Server 11 | SAP Sybase ASE 16 | SAP Enhancement Package 5 for ERP 6.0

* Benchmark results based on the best SAP SD Two-Tier results published as of April 2014. For the latest SAP SD Two-Tier results, visit: http://global.sap.com/solutions/benchmark/sd2tier.epx.

This latest world record in the SAP SD two-tier benchmark is just one example of Dell PowerEdge R920 leadership. The SAP SD two-tier workload was developed by SAP and its partners to measure both hardware and database performance of SAP applications and components. The benchmark simulates an SAP Sales & Distribution scenario, carried out by multiple clients (companies) whose users place orders concurrently for a sustained period of time with a predetermined number of users. The benchmark’s metrics are only considered valid when the host system sustains the maximum number of concurrent users while delivering sub-second average response times. For this discussion, we are specifically looking at the metric “number of benchmark users” as our measure of throughput.

As promised in my last blog posting, this is the second chapter in this leadership story for Dell and the PowerEdge R920. Be on the lookout for more stories of how Dell and our PowerEdge portfolio help power businesses and communities.

 

To view this and other Dell World Records, please visit the Intel World Record Page: http://www.intel.com/content/www/us/en/benchmarks/server/xeon-e7-summary.html

 To view the entire field of SAP SD two-tier benchmarks, please visit the SAP SD Standard Application Benchmark site: http://global.sap.com/solutions/benchmark/sd2tier.epx?num=200


Dell SonicWALL Next Generation Firewall protection against Heartbleed (CVE-2014-0160)


First, your Dell SonicWALL firewall itself is not vulnerable and, if Intrusion Prevention was active, has been protecting your network from the Heartbleed vulnerability since April 8th.

Now that this is out of the way, let’s look at what a Dell SonicWALL firewall can do to defend your network against the Heartbleed attack.

A quick background on Heartbleed

On April 7th, 2014, the Heartbleed vulnerability (CVE-2014-0160) was publicly disclosed in the widely used OpenSSL library, at the same time that a patched release of OpenSSL 1.0.1 was made available.  The vulnerability existed in the heartbeat code of the TLS protocol (hence the “Heart” part of the name), which was added to the OpenSSL codebase more than two years ago.  The relevant heartbeat code did not perform proper length validation of the payload field, allowing an attacker to trick a vulnerable server into returning, and thus leaking, sensitive memory contents in chunks of up to 64 Kbytes at a time (the “bleed” part of the name), leaving absolutely no trace of the attack on the server.

Why is this leakage of 64 Kbytes at a time such a big deal?  The chunks of server memory, which an attacker can request an indefinite number of times, have a high probability of containing sensitive information such as usernames, passwords, and private server keys.  For example, if a user “Bob” with the password “cake” unknowingly logs into a vulnerable web server, a lucky attacker can obtain a piece of memory containing “username=bob&password=cake”, thus leaking Bob’s password.  This type of data is encrypted and protected by the SSL/TLS protocol while in transit over the internet between the browser and the server.  Once it reaches the server, however, it is decrypted so that it can be interpreted by the web application (e.g. webmail, a shopping form, an admin interface).  Because decryption occurs in the memory of the OpenSSL library, this data remains in memory local to the OpenSSL decryption code, where it is exposed to any attack that can steal the contents of server memory.  That is exactly what the Heartbleed vulnerability does.
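To make the missing bounds check concrete, here is a deliberately simplified Python simulation of the flaw (illustrative only; it is not OpenSSL’s actual C code, and the memory contents are invented): the vulnerable handler trusts the attacker-supplied length field instead of checking it against the real payload length.

```python
# Simplified Heartbleed simulation (not OpenSSL code). A "secret" sits in
# server memory right next to the heartbeat payload, as decrypted request
# data might in a real process.
PAYLOAD = b"HEARTBEAT-PAYLOAD"
SECRET = b"username=bob&password=cake"
SERVER_MEMORY = bytearray(PAYLOAD + SECRET)

def heartbeat_response(memory, claimed_length):
    """Vulnerable handler: echoes back 'claimed_length' bytes starting at
    the heartbeat payload, with no bounds check -- the Heartbleed bug."""
    return bytes(memory[:claimed_length])

def heartbeat_response_fixed(memory, claimed_length, actual_length):
    """Patched handler: silently discard heartbeats whose claimed length
    exceeds the actual payload length, as the OpenSSL fix does."""
    if claimed_length > actual_length:
        return None
    return bytes(memory[:claimed_length])
```

An honest request for `len(PAYLOAD)` bytes returns only the payload; a malicious request claiming 64 Kbytes also returns the adjacent secret bytes, while the patched handler rejects it.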

The vulnerability affects servers and appliances that used OpenSSL 1.0.1 for secure communication with a client application, web browsers being the most common example of such an application.   

Much has been written on the topic, so I’ll just point you to heartbleed.com, Bruce Schneier’s blog, NIST, and my favorite explanation at xkcd.

Preventing the attack at the perimeter with Intrusion Prevention

The good news is that this attack can be stopped at the network perimeter (or at an internal boundary) with a capable next-generation firewall (NGFW) or Unified Threat Management (UTM) firewall, buying you valuable time during which you can patch vulnerable servers and other infrastructure.  All Dell SonicWALL firewalls with an active Intrusion Prevention service have been serving as the first line of defense against the Heartbleed vulnerability since April 8th, dropping the malicious traffic at the network boundary before it has a chance to hit webservers or exposed appliances.  The Intrusion Prevention protection analyzes SSL/TLS traffic entering the network and drops connections that contain:

  • Malicious standard TLS heartbeats
  • Malicious encrypted TLS heartbeats

It is not necessary to enable SSL Decryption (DPI SSL) on the firewall in order to block Heartbleed attacks, as the malicious heartbeat packets are in the SSL/TLS headers that can be observed without decrypting the SSL/TLS payload.  The attacks are covered by the following IPS signatures in the WEB-ATTACKS category, which are enabled automatically if you have prevention enabled for “High Priority” attacks:

  • 3616 OpenSSL Heartbleed Information Disclosure 1
  • 3638 OpenSSL Heartbleed Information Disclosure 2
  • 3652 OpenSSL Heartbleed Information Disclosure 3
  • 3653 OpenSSL Heartbleed Information Disclosure 4
  • 3661 OpenSSL Heartbleed Information Disclosure 5
  • 3663 OpenSSL Heartbleed Information Disclosure 6
  • 3744 OpenVPN Heartbleed Information Disclosure
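The plaintext-header check that makes decryption unnecessary can be sketched in a few lines of Python. This is a simplified illustration of the idea, not Dell SonicWALL’s actual signature logic; the 16-byte padding minimum comes from RFC 6520.

```python
def looks_like_heartbleed(record: bytes) -> bool:
    """Flag a TLS heartbeat record whose claimed payload length
    cannot possibly fit inside the record that carried it."""
    # A TLS record starts with: content type (1 byte), protocol
    # version (2 bytes), record length (2 bytes). Heartbeat records
    # use content type 24; the body then begins with the heartbeat
    # type (1 byte) and the sender's claimed payload length (2 bytes).
    if len(record) < 8 or record[0] != 24:
        return False                      # not a heartbeat record
    record_len = int.from_bytes(record[3:5], "big")
    claimed_payload = int.from_bytes(record[6:8], "big")
    # A well-formed body needs 1 type byte + 2 length bytes +
    # the payload + at least 16 bytes of padding (RFC 6520).
    return claimed_payload + 19 > record_len
```

Every field inspected here sits outside the encrypted payload, which is why an IPS can drop these messages without terminating SSL.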

It is important to understand that “encrypted” heartbeats are a way of obfuscating the heartbeat payload; the term refers neither to TLS encryption nor to the SSL/TLS payload itself.

More details and statistics on these countermeasures can be found in the SonicAlert posted on April 9th: https://www.mysonicwall.com/sonicalert/searchresults.aspx?ev=article&id=668 as well as the SonicAlert posted on April 18th: https://www.mysonicwall.com/sonicalert/searchresults.aspx?ev=article&id=671

TLS Heartbeats – Are they even necessary?

I could argue that SSL/TLS heartbeats are not necessary at all. 

Heartbeats may be useful in SSL/TLS implementations over connectionless protocols such as UDP (DTLS). In such protocols, the heartbeat can help with more efficient communication by reducing the number of unnecessary session terminations/re-establishments.

However, given that a significant majority of SSL/TLS traffic runs over TCP, these heartbeats are not essential to SSL/TLS performance over TCP. Very few applications rely on TLS heartbeats, and they can only be used when both the server and the client run a version of OpenSSL that supports them.  In the absence of heartbeat support on the server, the SSL/TLS specification mandates falling back to heartbeat-less operation, which covers the majority of SSL/TLS use in the real world.

Therefore, we’re also providing the ability to completely disable SSL/TLS heartbeats with the following signatures in the IPS engine:

  • 3706 OpenSSL Heartbeat 1
  • 3734 OpenSSL Heartbeat 2
  • 3735 OpenSSL Heartbeat 3

The bad guys don’t sit still – Heartbleed Malware

Immediately after a newsworthy event, malware writers usually capitalize on people seeking information by creating malicious webpages and malware that are likely to be found in common search queries.  In this particular case, it only took a few days for a piece of malware to surface that masquerades as the Heartbleed testing tool.  Fortunately, it was quickly caught by the Dell security research teams and on Friday, April 11th, we added protection against this malware. 

The malware is, predictably, called heartbleed.exe and, upon execution, registers with a Command and Control (C&C) server and runs a Trojan on the infected system.

These are covered with the following Anti-Malware signatures, but I’m sure that this list will continue to grow:

  • GAV: Zacom.A (Trojan)
  • GAV: Zacom.A_2 (Trojan)
  • IPS: 3686 Zacom heartbleed malware activity 1
  • IPS: 3688 Zacom heartbleed malware activity 2

 

More details can be found at https://www.mysonicwall.com/sonicalert/searchresults.aspx?ev=article&id=669 and in the April 18th update https://www.mysonicwall.com/sonicalert/searchresults.aspx?ev=article&id=671

 

Advice on testing whether you’re vulnerable

As always, be very wary of running any executables that you download from the internet, especially small utilities that are not code-signed by a well-known organization.  

WARNING: There are many convenient websites that let you quickly test whether your externally facing servers (whether websites or appliances) are vulnerable. I strongly recommend *against* using those sites for testing, since you’re effectively declaring yourself vulnerable to an entity that makes no guarantees about what it will do with that information.

I’m certain well-meaning testing sites exist, but, as with the malware mentioned above, it doesn’t take long for tools with malicious intent to show up.  For example, it is not a difficult project to set up a website that tests for the Heartbleed vulnerability and logs submitted sites that it finds to be vulnerable. 

Therefore, think twice about what you leak about your public web presence.

The correct way to test whether you’re vulnerable is by using one of the many Python scripts that are available on the internet as source code.  Even if you’re not a Python expert, a quick scan of the code can tell you whether the results of a scan are logged or sent off somewhere. You can use these scripts against publicly facing sites and internal servers with full confidence that your findings are not leaked or shared.  When using these Python scripts, make sure to download the source code and execute it as a script.  Never run pre-compiled code downloaded from the web.

 

Dell SonicWALL Customers: Checking for history of attacks:

If you’re running Dell SonicWALL GMS or Analyzer, you can check for attacks caught by your firewall by selecting

  • The firewall in question
  • Intrusions (Detected or Blocked)
  • The appropriate date range

You will be presented with a view similar to the following

 

Dmitriy Ayrapetov

Director, Product Management, Network Security

PS: There were some non-firewall Dell SonicWALL products that were vulnerable and have been patched.  If these appliances were behind a Dell SonicWALL firewall with Intrusion Prevention, the risk was significantly lower.  Please see the security bulletin for more details. 

Dell Open Source Ecosystem Digest #44


Big Data:
Datameer: How-to: Incrementally Load Data from Relational Sources in Datameer
In case you haven’t noticed, we are always striving to maximize efficiency and ease for our customers. As such, one of my favorite features in Datameer is the ability to incrementally load data from relational sources. This allows you to keep fresh data in your Hadoop system while also putting minimal stress on the upstream relational system since you’re only adding changed or updated records to your environment. Read more.

DevOps:
Rackspace: Rolling Deployments With Ansible and Cloud Load Balancers
Recently a fellow Racker wrote a great post about zero downtime deployments. I strongly believe in the principles he described. In his example, he used Node.js to build a deployment script to accomplish the goal, and I want to explore how this could be done with Ansible and the Rackspace Cloud. Read more.

OpenStack:
Adam Young: Configuring mod_nss for Horizon
Horizon is the Web Dashboard for OpenStack. Since it manages some very sensitive information, it should be accessed via SSL. I’ve written up in the past how to do this for a generic web server. Here is how to apply that approach to Horizon. Read more.

Adam Young: MySQL, Fedora 20, and Devstack
Once again, the moving efforts of OpenStack and Fedora have diverged enough that devstack did not run for me on Fedora 20. Now, while this is something to file a bug about, I like to understand the issue to have a fix in place before I report it. Here are my notes. Read more.

Adam Young: Packstack to LDAP
While Packstack makes it easy to get OpenStack up and running, it does not (yet) support joining to an existing Directory (LDAP) server. I went through this recently and here are the steps I followed. Read more.

Andreas Jaeger: OpenStack Icehouse released - and available now for openSUSE and SUSE Linux Enterprise
OpenStack Icehouse has been released and packages are available for openSUSE 13.1 and SUSE Linux Enterprise 11 SP3 in the Open Build Service. Read more.

Cody Bunch: Basic Hardening with User Data / Cloud-Init
The ‘cloud’ presents new and interesting issues around hardening an instance. If you buy into the cattle vs. puppies or cloudy application building mentality, your instances or VMs will be very short lived (Hours, Weeks, maybe Months). The internet however, doesn’t much care for how short lived the things are, and the attacks will begin as soon as you attach to the internet. Thus, the need to harden at build time. What follows is a quick guide on how to do this as part of the ‘cloud-init’ process. Read more.

Cody Bunch: Basic Hardening Part 2: Using Heat Templates
Specifying that as user-data every time is going to get cumbersome, as will managing your build scripts if you are using CLI tools and the like. A better way is to build it into a Heat Template. This allows for some flexibility in both version controlling it as well as layering HOT templates to a desired end state (HOT allows for template nesting). Read more.

Docker: OpenStack – Icehouse release update
Today, we expect the release of OpenStack Icehouse. In March, we reminded readers that Docker will continue to have OpenStack integration in Icehouse through our integration with Heat. Of course, that remains true. Since then, however, much has happened to warrant an update. Read more.

James Page: OpenStack 2014.1 for Ubuntu 12.04 and 14.04 LTS
I’m pleased to announce the general availability of OpenStack 2014.1 (Icehouse) in Ubuntu 14.04 LTS and in the Ubuntu Cloud Archive (UCA) for Ubuntu 12.04 LTS. Users of Ubuntu 14.04 need take no further action other than follow their favourite install guide – but do take some time to checkout the release notes for Ubuntu 14.04. Read more.

Matt Fischer: Keystone – using stacked auth (SQL & LDAP)
Most large companies have an Active Directory or LDAP system to manage identity and integrating this with OpenStack makes sense for some obvious reasons, like not having multiple passwords and automatically de-authenticating users who leave the company. However doing this in practice can be a bit tricky. Read more.

Mirantis: OpenStack Icehouse is here: what do you want to know?
After 6 months of intensive development, OpenStack Icehouse was released today. With an increased focus on users, manageability and scalability, it’s meant to be more “enterprise friendly” than previous releases. Here are some of the quick facts from the OpenStack Foundation. Read more.

Mirantis: OpenStack Icehouse: More Than Just PR
Is your head swimming with Icehouse information yet? By now you have certainly heard that the ninth OpenStack version, Icehouse, has been released. The most prominent features, such as rolling compute upgrades, improved object storage replication, tighter integration of networking and compute functionality, and federated identity services, have gotten a lot of airtime, but with more than 350 implemented blueprints, there’s a lot more below the surface of this particular iceberg. Here at Mirantis, we’ve been doing a lot of work with Icehouse, and we’ll be going into some technical depth on many of the new features of Icehouse in our webinar on April 24. In the meantime we wanted to give you a taste of what’s available. Read more.

Opensource.com: OpenStack Icehouse brings new features for the enterprise
Deploying an open source enterprise cloud just got a little bit easier yesterday with the release of the newest version of the OpenStack platform: Icehouse. To quote an email from OpenStack release manager Thierry Carrez announcing the release, "During this cycle we added a new integrated component (Trove), completed more than 350 feature blueprints, and fixed almost 3000 reported bugs in integrated projects alone!" Read more.

Opensource.com: TryStack makes OpenStack experimentation easy
Not everyone has a spare machine sitting around which meets the requirements of running a modern cloud server. Even if you do, sometimes you don't want to go through the entire installation and setup process just to experience something as an end user. TryStack is a free and easy way for users to try out OpenStack, and set up their own cloud with networking, storage, and compute instances. Read more.

Rackspace: Why We Craft OpenStack (Featuring Rackspace Software Developer Ed Leafe)
As OpenStack Summit Atlanta fast approaches, we wanted to dig deeper into the past, present and future of OpenStack. In this video series, we hear straight from some of OpenStack’s top contributors from Rackspace about how the fast-growing open source project has evolved, what it needs to continue thriving, what it means to them personally, and why they are active contributors. Watch the video.

SUSE: OpenStack Icehouse – Cool for Enterprises
OpenStack Icehouse has arrived with spring, but this ninth release of the leading open source project for building Infrastructure-as-a-Service clouds won’t melt under user demands. This is good, since user interest in OpenStack seems to know no limits as evidenced by the three new and sixteen total languages in which OpenStack Dashboard is now available. Read more.

The OpenStack Blog: Open Mic Spotlight: Joshua Hesketh
An interview with Joshua Hesketh, a software developer for Rackspace Australia working on upstream OpenStack. Read more.

The OpenStack Blog: Open Mic Spotlight: Simon Pasquier
An interview with Simon Pasquier, an OpenStack Active Technical Contributor. Read more.

5 Questions to Ask Yourself About Protecting Your Data


 

Aside from the people you employ, data is your organization’s biggest competitive advantage. So, naturally, data should be treated as the highest value asset in the organization, right? Shockingly, this is far from the truth in many cases.

Considering the growing number of data breaches, the penalties and reputational risks at stake, and the fact that more employees are mobile, using public clouds and accessing potentially sensitive data, the problem becomes even more pronounced. IT has to protect data wherever it might be and achieve compliance without adding burden, while enabling end-user productivity and simplifying management.

Start by answering these 5 questions and you will be on your way to better protecting your data to better serve the business.

  1. Where is my data? What’s most at risk?
  2. How should data be categorized to minimize risk?
  3. How should access be established and enforced to ensure compliance?
  4. How can application access be ensured to minimize data loss?
  5. How can data be protected anywhere it goes?

To answer these questions, you need a lifecycle approach to protecting data: Identify> Classify> Govern> Enable> Encrypt.

Start by identifying data to determine what needs attention: for example, the “hot spots” of unstructured data that contain the most risk. Objects are displayed with red > yellow > green risk ratings, with each data block drillable to determine policies.

Once you have identified what needs attention, you move to the classification stage. Here you determine what sort of sensitive information is contained in unstructured data files, and automatically categorize and organize the PII and any other data you have flagged by policy. Based on policy, all assignments can then be delegated, re-assigned, managed, and so on as part of an object lifecycle.
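As a toy illustration of policy-driven classification (the patterns and risk tiers below are invented for this example and are not part of any Dell product), a scanner might map content matches in unstructured text to the red/yellow/green risk ratings:

```python
import re

# Illustrative classification policy: each rule maps a content
# pattern to a risk tier. Real policies would be far richer.
POLICY = {
    "ssn":   (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "red"),
    "email": (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "yellow"),
}

def classify(text: str) -> str:
    """Return the highest risk tier found in an unstructured blob."""
    ranking = {"green": 0, "yellow": 1, "red": 2}
    level = "green"
    for pattern, risk in POLICY.values():
        if pattern.search(text) and ranking[risk] > ranking[level]:
            level = risk
    return level
```

Running such a classifier across file shares is what produces the drillable hot-spot view described above: every object carries a tier, and the tiers roll up into the red/yellow/green display.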

Step 3 is all about governance, giving access control to the business owners who actually know who should have access to which sensitive data, with the power to analyze, approve and fulfill unstructured data access requests to files, folders and shares across NTFS, NAS devices and SharePoint. Important here is the ability to go beyond the data to show what other entitlements the user has in the organization. Critical to step 3 is scheduling regular business-level attestation of access delivered with a 360-degree view of user access. Dell delivers capabilities in identifying, classifying and governing data in its Dell One Identity Manager product, and provides auditing in its ChangeAuditor family.

With growing numbers of remote and mobile users, it makes sense to next focus on enabling those users to access data securely through SSL VPN, like with Dell Secure Mobile Access. A secure mobile access solution should be delivered platform agnostically – authenticated users should be able to access allowed corporate applications and resources from any validated device.

The final step in ensuring data is protected is of course encryption. Look for a solution that protects data on any device, external media and in the cloud without overburdening your end-users. FIPS 140-2 level 3 compliance is a must.

Enabling users to access data, while protecting it and ensuring compliant usage of it doesn’t have to lead to an IT tradeoff. Start with this lifecycle approach to help you achieve your most stringent compliance and data security requirements and look for best-in-class connected solutions that provide the context and intelligence across all of these requirements.

Double Trouble: Critical IE vulnerability weeks after Windows XP End of Life


Update: on May 1st, Microsoft released a patch for the critical IE vulnerability, including for Windows XP systems.  The instructions in this blog still apply until you have fully patched all IE installations in your organization. https://technet.microsoft.com/en-us/library/security/ms14-may.aspx

April just got a lot tougher.  Closely following both the End of Life announcement of Windows XP and the much publicized Heartbleed vulnerability, the IT industry now faces another security challenge.  This time, it’s an Internet Explorer remote execution vulnerability that affects all versions of IE, a severe problem on its own that is exacerbated by the fact that technically, IE browsers on Windows XP will not get patched.

In this blog, we’ll discuss the vulnerability and how organizations can protect themselves immediately and going forward.  

Vulnerability background

On April 26th, Microsoft released Microsoft Security Advisory 2963983, which addresses a remote code execution vulnerability, CVE-2014-1776, in IE versions 6 to 11. A successful exploit of this vulnerability will cause arbitrary code to run in the context of the current user within IE.  At this time, Microsoft has not stated when a patch for the vulnerability will be available for supported Windows platforms, but more importantly, it is likely that Windows XP PCs will not receive a patch due to the EOL of that platform. For a view of the broader security implications of Microsoft Windows XP end of support, read our blog published in December.

Why is it a significant issue in the security industry?  Given that IE represents roughly a quarter of the browser share, this potentially exposes a large number of internet users’ computers to malware attacks.  Reports of malicious sites using the vulnerability to hijack PCs surfaced immediately upon publication of the vulnerability (ArsTechnica).   This is a very typical, and highly successful, method of obtaining access to company data utilizing readily available malware. 

Traditional Firewalls – Insufficient Again

Traditional stateful firewalls are blind to these attacks.  Malicious traffic utilizing this vulnerability used to attack end users inside a network appears as 100% legitimate traffic to stateful firewalls.

On the other hand, next-generation firewalls and unified threat management firewalls, as well as intrusion prevention systems, are designed to protect networks from such attacks. 

First Priority: Protect your network

Dell SonicWALL firewall customers that have the Intrusion Prevention Service enabled have been protected against this attack since Sunday, April 27th through an automatic update pushed out over the weekend with the following update:

  • IPS: 3787 Windows IE Remote Code Execution Vulnerability (CVE-2014-1776)

As the Deep Packet Inspection engine scans traffic returning to the client browser, the firewall spots and drops the code that triggers the vulnerability in IE.    

You can find more information on our SonicAlert page https://www.mysonicwall.com/sonicalert/searchresults.aspx?ev=article&id=674

As with all other Microsoft advisories, Dell SonicWALL is listed as one of the partners with protection on the Microsoft Active Protection Program (MAPP) page.  See:
http://technet.microsoft.com/en-us/security/dn568129

Second Priority: Control IE usage until all systems are patched

As a matter of preventative maintenance going forward, customers with Dell SonicWALL firewalls can use Application Control to identify and restrict IE traffic while systems are being patched.  This could be especially useful for networks that still have Windows XP widely deployed, given that Microsoft at this point may not patch these systems.  By blocking internet access from Internet Explorer on these systems, network administrators can significantly reduce the security risk.

Blocking Internet Explorer using Application Control can be accomplished by creating an application rule on the firewall which will restrict outgoing traffic based on browser identification.  In this particular example, IT administrators can use the following application signatures:

  • 10275   Microsoft Internet Explorer -- HTTP User-Agent MSIE 5.0
  • 10269   Microsoft Internet Explorer -- HTTP User-Agent MSIE 6.0
  • 10270   Microsoft Internet Explorer -- HTTP User-Agent MSIE 7.0
  • 10271   Microsoft Internet Explorer -- HTTP User-Agent MSIE 8.0
  • 10272   Microsoft Internet Explorer -- HTTP User-Agent MSIE 9.0
  • 10273   Microsoft Internet Explorer -- HTTP User-Agent MSIE 10.0
  • 10274   Microsoft Internet Explorer -- HTTP User-Agent MSIE 11.0

This can also be accomplished by selecting the broad “Internet Explorer” category instead of picking specific browser versions.
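Under the hood, this kind of application signature keys off the HTTP User-Agent header. A simplified Python illustration of the match follows; the regex mirrors the signature names above, though note that real IE 11 user agents report `Trident/7.0; rv:11.0` rather than `MSIE 11.0`, which is one reason the broad category match can be preferable.

```python
import re

# Match the MSIE version tokens corresponding to the application
# signatures listed above (illustrative only, not firewall code).
MSIE = re.compile(r"MSIE (5\.0|6\.0|7\.0|8\.0|9\.0|10\.0|11\.0)")

def is_internet_explorer(user_agent: str) -> bool:
    """Return True if an HTTP User-Agent header identifies IE."""
    return MSIE.search(user_agent) is not None
```

A firewall applying such a match to outbound HTTP requests can reset the connection whenever the browser identifies itself as Internet Explorer, which is exactly what the Reset/Drop action in the policy below does.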

For customers with Dell SonicWALL firewalls, you can follow these steps to enable such a policy on your firewall. First, you must have Application Control licensed and enabled on your Dell SonicWALL firewall.  You can check whether it’s licensed under the System > Licenses page.

1) Create a match “object” that the policy will use to decide which traffic to block.  In this case, we want to identify “Internet Explorer” traffic.

   a. Under System > Match Objects, select “Add New Match Object”

      i. Give the match object a name

      ii. Select Application Signature List

      iii. Select WEB-BROWSER

      iv. Pick which versions of IE you would like to block

2) Create a rule that uses this “Block IE” object to enforce a policy.

   a. Under System > App Rules, create a new policy

   b. Policy Type: “App Control Content”

   c. Match Object: select the “Block IE” object to use in this policy

   d. Make sure that the Action is set to “Reset/Drop”

You can make additional tweaks such as inclusion and exclusion groups through AD/LDAP integration. For example, you can enforce this policy for everybody in the company except IT.


At this point, all outgoing connections from the network using Internet Explorer will be blocked. 

You can confirm that the rule is functioning by trying to access any webpage on the Internet using IE.  The following will show up in the rules:


At this point, you can rest easy knowing that your users cannot access the internet with Internet Explorer. 

Dell TechCenter Rockstar Interview #2: Ajith George


In this interview series we will introduce IT Professionals selected for this year’s Dell TechCenter (DTC) Rockstar Program. This program recognizes independent experts, Dell customers and employees for their significant positive impact on Dell TechCenter, blogs and social media when discussing Dell. You can find a list of all our Rockstars here. Our second interviewee is Ajith George.


Dell TechCenter: Can you please explain to folks who are not experts in your area: What is your domain of expertise?

Ajith George: I am a Development Engineer working on Dell OpenManage Connections.  I mainly focus on applications for integrating Dell devices into third-party consoles such as HP Operations Manager (HPOM), IBM Tivoli Netcool/OMNIbus, VMware vCenter, and so on. The TechCenter page Dell OpenManage Connections and Integrations covers the product details; you can also visit my profile at Dell TechCenter. I recently moved to the Dell Storage Engineering group and am looking forward to an exciting and challenging journey ahead!

Q: What are the most exciting trends in your area?

A: There is a lot of interest in converged infrastructure (server, storage and networking), and exciting product offerings and innovations around it are mushrooming.

Q: Can you point us to resources you find particularly valuable?

A: The Dell OpenManage Connections and Integrations portfolio details at Dell OpenManage Connections and Integrations are particularly valuable for me.

Q: How do you engage with the IT community?

A: I participate in the OpenManage Connections and Integrations and Systems Management forums. I also engage in third-party forums with regard to Dell integrations. Authoring and publishing wiki articles, release blogs and white papers, and reviewing and mentoring team members who want to publish and engage in communities, is what I do on a regular basis.

Q: What are the most cutting edge Dell products – and why?

A: Dell PowerEdge VRTX CMC and Dell Fluid Cache Solution for SAN are products I find particularly exciting. I’ve recently moved to the Dell Storage group after spending 4 years with the Dell System Management team (Server Engineering group), and I believe there are more exciting and challenging storage products to explore.

Q: Ajith, thank you so much!

Dell Open Source Ecosystem Digest #45


Big Data:
Datadog: Monitoring NGINX Plus load balancing metrics
In a short 5 years, NGINX has gone from powering 0 to powering 1 in every 6 of the busiest websites on the Internet. Datadog uses NGINX and chances are, you do too. Datadog is pleased to announce the expansion of our current NGINX monitoring to support the additional features offered through NGINX Plus. Read more.

Cloudera: Using Apache Hadoop and Impala with MySQL for Data Analysis
Apache Hadoop is commonly used for data analysis. It is fast for data loads and scalable. In a previous post I showed how to integrate MySQL with Hadoop. In this post I will show how to export a table from MySQL to Hadoop, load the data to Cloudera Impala (columnar format), and run reporting on top of that. Read more.

DevOps:
Chef: Plan, Build, Run: Please Don't
Several months ago, McKinsey & Company released a report entitled “The enterprise IT infrastructure agenda for 2014” which offers strategic advice to CIOs and other IT managers for 2014. In the spirit of having a constructive dialogue, I have to call out some instances of questionable advice in this document that are incompatible with what we at Chef call the “New IT’ or the coded business. One line item stands out especially for my criticism: the recommendation that enterprises implement a “plan, build, run” organizational model for their infrastructure departments. Read more.

Rob Hirschfeld: DevOps Concept: “Ready State” Infrastructure as hand-off milestone
Working for Dell, it’s no surprise that I have a lot of discussions around building up and maintaining the physical infrastructure to run a data centers at scale. Generally the context is around OpenCrowbar, Hadoop or OpenStack Ironic/TripleO/Heat but the concerns are really universal in my cloud operations experience. Read more.

Vagrant: Feature Preview: Docker-Based Development Environments
Vagrant 1.6 comes with a new built-in provider: Docker. The Docker provider allows Vagrant to manage development environments that run within containers, rather than virtual machines. This works without any additional software required on Linux, Mac OS X, and Windows. Read more.

Vagrant: Feature Preview: Windows Guests
Vagrant 1.6 will add a monumental feature to Vagrant: full Windows guest support. The ability of Vagrant to manage Windows environments just as easy as Linux environments has been requested for years and the time for complete, official support has come. Read more.

OpenStack:
Boden Russel: KVM and Docker LXC Benchmarking with OpenStack
Linux containers (LXCs) are rapidly becoming the new "unit of deployment" changing how we develop, package, deploy and manage applications at all scales (from test / dev to production service ready environments). This application life cycle transformation also enables fluidity to once frictional use cases in a traditional hypervisor Virtual Machine (VM) environment. For example, developing applications in virtual environments and seamlessly "migrating" to bare metal for production. Not only do containers simplify the workflow and life cycle of application development / deployment, but they also provide performance and density benefits which cannot be overlooked. Read more.

Cloudscaling: The 6 Requirements of Enterprise-grade OpenStack, Part 1
OpenStack is an amazing foundation for building an enterprise-grade private cloud. The great OpenStack promise is to be the cloud operating system kernel of a new generation. Unfortunately, OpenStack is not a complete cloud operating system, and while it might become one over time, it’s probably best to look at OpenStack as a kernel, not an OS. Read more.

Cloudscaling: The 6 Requirements of Enterprise-grade OpenStack, Part 2
In part 1 of this series earlier this week, I introduced The 6 Requirements of Enterprise-grade OpenStack. Today, I’m going to dig into the next two major requirements: Open Architectures and Hybrid Cloud Interoperability. Read more.

Cloud Architect Musings: What it means to be in the OpenStack community
Earlier this year, I had the privilege of giving two talks at the CloudStack Collaboration Conference in Denver, Colorado. Neither talk was specifically about OpenStack; the first was on navigating the differences between legacy and cloud native applications while the second was on getting involved with and growing technical communities. While the response within both the CloudStack and OpenStack communities was overwhelmingly positive, I did receive questions from some who wondered what an OpenStack Evangelist from Rackspace was doing speaking at a CloudStack conference and potentially helping to build up a rival community. Read more.

ICCLab: Getting Started with OpenShift and OpenStack
In Mobile Cloud Networking (MCN) we rely heavily on OpenStack, OpenShift and of course Automation. So that developers can get working fast with their own local infrastructure, we’ve spent time setting up an automated workflow, using Vagrant and puppet to setup both OpenStack and OpenShift. Read more.

ICCLab: Managing hosts in a running OpenStack environment
How does one remove a faulty/un/re-provisioned physical machine from the list of managed physical nodes in OpenStack nova? Recently we had to remove a compute node in our cluster for management reasons (read, it went dead on us). But nova perpetually maintains the host entry hoping at some point in time, it will come back online and start reporting its willingness to host new jobs. Read more.

Nathan Kinder: Secure messaging with Kite
OpenStack uses message queues for communication between services. The messages sent through the message queues are used for command and control operations as well as notifications. For example, nova-scheduler uses the message queues to control instances on nova-compute nodes. Read more.

Opensource.com: Five new, excellent OpenStack tutorials
Launching a private cloud on open source software might seem like a daunting task, but fortunately, the OpenStack community is working hard to provide resources and tutorials to make it easier for people of all skill levels. In addition to the official documentation, and books like the one we profiled from O'Reilly, there are several other great howtos out there to help you get through whatever your next step might be. Read more.

Rackspace: Project Solum Goes Full Steam Ahead To Milestone 1
After less than six months in development, Project Solum has accomplished the first development milestone (Milestone 1). This important event allows deployment of code from Github via Heat to generate a running app deployed to Docker containers using a generalized (Heroku) build pack for the app stack. Read more.

Ravello: Multi-node OpenStack RDO IceHouse on AWS/EC2 and Google
OpenStack is awesome. But, as anybody who has played with it will tell you, its installation procedure can be daunting. Maybe you’ve always wanted to play with it but never found the time? Or maybe you did install it, but you had to spend days scrounging for suitable hardware? Or maybe you’re an expert, but you have no way to quickly spin up and down entirely new installs? If the answer is yes to any of these, then read on. Read more.

The OpenStack Blog: Announcing the O’Reilly OpenStack Operations Guide
Er, what’s this? An O’Reilly OpenStack Operations Guide offered side-by-side with the continuously-published OpenStack Operations Guide? Yes, your eyes do not deceive you, O’Reilly has completed the production of the OpenStack Operations Guide: Set Up and Manage Your OpenStack Cloud. You can get your bits-n-bytes copy at http://docs.openstack.org/ops/ or order a dead-tree version on the O’Reilly site. Read more.

The OpenStack Blog: Open Mic Spotlight: Lucas Gomes
An interview with Lucas Gomes, a core developer on the OpenStack Ironic project. Read more.

Dell TechCenter Rockstar Interview #3: Gina Rosenthal

In this interview series we will introduce IT Professionals selected for this year’s Dell TechCenter (DTC) Rockstar Program. This program recognizes independent experts, Dell customers and employees for their significant positive impact on Dell TechCenter, blogs and social media when discussing Dell. You can find a list of all our Rockstars here. Our third interviewee is Gina Rosenthal.


Dell TechCenter: What is your domain of expertise?

Gina Rosenthal: I come from a *nix/storage background. Currently I’m in Dell’s Data Protection organization, where I’m the product marketing manager for Dell AppAssure backup and recovery software.

Q: What are the most exciting trends in your area?

A: Well, most people will tell you backups are boring, and I guess to an extent that’s true. But restoring data is NOT boring at all! One thing I’m interested in is how underlying systems are changing. Right now I’m trying to wrap my mind around Microsoft CSV (cluster shared volume) and the structure of how reads and writes actually work. It’s really interesting to see this architecture, and to understand how you now have to architect applications and management applications like the one I support. I think our area of the industry really needs to get back to basics – mostly because the basics are changing!

If you think you can just slap on some backup software, you may have unexpected results if – no, when – you have to restore. Just keeping up with how the underlying architectures are changing is exciting.

Q: Can you point us to resources you find particularly valuable?

A: The Dell AppAssure community on the Dell TechCenter is amazing. They moved over to Dell TechCenter from the original AppAssure forums. They have been with the product a while, they ask great questions and help each other. Just great people! We’ll be restarting Backup.U in a couple of weeks as well. We developed the program with independent analyst Greg Schulz (Twitter: @storageio), and it consists of two live events each month (a webinar and a Google+ Hangout) as well as additional links and content on one backup topic. We started out talking about the basics of backups; if you want to check out the recordings and collateral, go here. When we pick up the series again (hopefully next month) we’ll be talking about your data protection toolbox – data footprint reduction, archiving, tiering, replication, and management. It’s a good time, and it’s completely vendor neutral – we don’t talk about Dell products at all! We are just trying to get back to basics, help each other learn about the craft of backups, and try to keep up with all those changes I talked about earlier.

I’ve been using Twitter to find where to look for detailed tech info on CSV. So I can’t forget about that!

Q: What are the most cutting edge Dell products – and why?

A: This is a hard one – there are lots of products in the software portfolio I’d like to know more about. Dell AppAssure is cutting edge in that it does block-based backup and recovery. Every time I go to an event I hear more about how some of our software products work – they can do things I would have died to have when I was a sysadmin! And I haven’t had time to dig into how Enstratius helps you manage cloud environments.

Q: Gina, thank you so much!


22% of you are going to experience a data breach. How prepared are you?

If I were to tell you that there is a 22% likelihood that your employer will experience a data breach involving at least 10,000 records, and that data breach will cost your employer $3.5 million, and result in a loss of 3% of your employer’s customers worth about $3.3 million in lost business, would I have your attention?

Those are real numbers – real business consequences – published this week in the annual Ponemon Institute 2014 Cost of Data Breach Study. Let me take a few minutes and sum up this year’s report:

  • The average total cost of a data breach for companies participating in the annual research report went up 15% to $3.5 million.
  • The average cost paid for each lost or stolen record during the year increased 9% to $145 (US).
  • The probability of a material data breach involving at least 10,000 records is more than 22%.
  • Of the three root causes – malicious or criminal attacks, system glitches, and human error – malicious attacks accounted for the largest share of breaches at 42%, and carried the highest cost.
  • In the US, firms run the risk of losing 3.3% of their customers to churn as a result of a data breach, resulting in $3.3 million in lost business.

It has been an interesting few weeks for IT security professionals. Between new and widespread vulnerabilities and the annual publications of data breach reports from Verizon and the Ponemon Institute, you’re likely drowning from either fire drills or data. But take the time and perform the annual gut-check of your enterprise security plan. Answer these questions:

  1. How are you currently granting users and administrators only the access rights required for their jobs? Are you able to base those rights on policies established by those who know, namely the business and not IT?
  2. How effective is your perimeter defense? Can it scan every byte of every packet for the deepest level of network protection against threats to data? How are you enabling the secure access to that data and the applications where that data resides?
  3. How well are you able to manage or quarantine infected endpoints to help prevent them from infecting the rest of the network and impacting data?
  4. Does your data protection solution allow you to protect data wherever it resides: on devices, external media and in public cloud storage?
  5. Do you currently have an incident response plan in place? How do you plan to scale up or down based on the threats you are seeing?

Consider the costs and negative impacts of a data breach to sharpen your remediation so you don’t become another number.

Announcing the availability of the OpenManage Integration for VMware vCenter version 2.1.

Blog post written by Damon Earley, Product Marketing and Manoj Gujarathi, Sr. Engineering Manager at Dell.

On behalf of the entire OpenManage Integration for VMware vCenter (OMIVV) team, we are excited to announce the availability of version 2.1 of this award winning product!  With version 2.1, we continued to listen to your feedback and added enhancements while addressing open issues. 

The key enhancement added in version 2.1 is agent-free enablement for blade chassis enclosures within the vSphere Web Client. This release gives virtualization administrators information on the Dell chassis enclosures hosting PowerEdge blade servers, for easier management and better customization of alerts and actions.

Support for the Dell M1000e and Dell PowerEdge VRTX chassis enclosures includes:

  • Discovery of the chassis enclosure inside vCenter console, with automatic association of blades that are part of that chassis
  • Inventory support for the chassis components including IP, service tag, model, slot information, and more
  • Monitoring and alerting on chassis health, based on the status of each component, by integrating with the Dell Chassis Management Controller (CMC)
  • Health of overall chassis, along with granular display of the health of each component within

 Version 2.1 also adds support for vSphere 5.0 U3 and vSphere 5.1 U2, along with bug fixes.

For those who would like to learn more about the entire product, please use the OpenManage Integration for VMware vCenter community page, which is our main portal for information updates. There are many videos, whitepapers, and links related to how to make the most of OMIVV.

To use this version, you can deploy a new OVF or use the RPM update mechanism to upgrade from a previous version.  For customers upgrading from v2.0, no new license file is required.

Please view additional details of the release in the OpenManage Integration 2.1 documentation, posted here.  If you’re interested in giving the product a try, a 5 host, 90 day evaluation version is available here!

Dell Security Delivers New Content Filtering Client for Mobile Users

Technology is a great thing. It’s given us the ability to surf the internet from pretty much anywhere at any time using our device of choice – a PC, a laptop, a tablet, even the smartphone that we take with us everywhere. This freedom to surf brings up the question of web content control. Sure, if it’s my personal device and I’m connected to the internet through my provider I don’t want anyone telling me what I can and can’t look up. But what if the device was issued by my company? Or, what happens when a child is given a tablet by the school to take home and use? Should there be some form of built-in control over the web content that the device can be used to access?

If you’re an IT administrator for a company, a school or any another type of organization, the answer is yes. Why? Anytime you hand users a device to access the internet you put your network, your organization and in some cases even the individual at risk. Now, rarely do you find a company or school that provides an internet-enabled device without some web content controls in place, especially when the device sits behind the network firewall. Doing so would open the organization to a host of harmful sites such as those that are known to distribute malware, those that expose children to inappropriate content and others that drain network bandwidth.

But what happens when that same device is used outside the firewall perimeter? You could force all internet traffic from the roaming device back through the firewall so that it adheres to the internal web filtering policy, but that might constrain valuable network bandwidth. Dell SonicWALL has another option, the Content Filtering Client. Available for Windows and Mac OS devices, the Content Filtering Client complements Dell SonicWALL Content Filtering Service (CFS) by extending enforcement of internal web use policies to block objectionable and unproductive Internet content for devices located outside the firewall perimeter. Ideal for schools, businesses and government agencies, the client enforces security and productivity policies whenever the device connects to the internet regardless of where the connection is established.  Deployment and provisioning are simplified through a Dell SonicWALL firewall and the client is managed and monitored through a policy and reporting engine.  And, if the client is out of date, the device is prevented from accessing the Internet until remediation steps are taken to achieve compliance.

We have had a great response from our customers and partners with the recent product release:

"I have to say that I’m really pleased with our decision to go with Dell SonicWALL ‘Content Filtering Client’—good call."Brian Dentler, Network Administrator, NorthPointe Christian Schools

For organizations looking to protect students from harmful web-based content, increase employee productivity or meet compliance regulations, Dell SonicWALL content filtering solutions offer the ideal combination of “inside/outside” policy enforcement. Dell SonicWALL CFS provides the granular tools to help you create and enforce the web use policies you require for users behind your firewall. The Dell SonicWALL Content Filtering Client extends that same protection and productivity enforcement to users outside the firewall perimeter, ensuring roaming devices also adhere to the policies you set regardless of where the connection is established. Kevin Downham, IT Manager at MacDermid, Inc., had this to say about the new Dell SonicWALL Content Filtering Client: "This ‘Content Filtering Client’ product is working well. No firewall crashes. No clients complaining. I love this product, as you know, and want to push it to the whole of my company. I think the product gives network admins such as myself peace of mind that mobile users are using the Internet for its correct purpose, and it goes a long way in protecting my company’s hardware and users.

“I reiterate that I’m pushing hard to make other network admins within my company aware of this product, so that it becomes a piece of software that is company standard."


Dell releases updated version of PowerEdge VRTX providing “Fault Tolerant Shared PERC 8” capability

Authors: Kiran Poluri, Kiran Devarapalli, and Krishnaprasad K from the Dell Hypervisor Engineering team

Dell recently announced an updated configuration release of PowerEdge VRTX which provides the fault-tolerant feature for Shared PERC8. This blog focuses on VMware ESXi support for the PowerEdge VRTX "Fault Tolerant Shared PERC 8" feature.

With this release, the PowerEdge VRTX system is available in two configurations of the Shared PowerEdge RAID Controller (PERC) 8 card:

    • Single Shared PERC 8 Card Configuration
    • "Fault Tolerant Shared PERC 8” Card Configuration

For more details on these configurations, refer to the ‘Overview’ section in the PowerEdge VRTX User Guide available at the Dell support site.

What Is New In This Release?

This release of Dell PowerEdge VRTX supports:

    • Fault-tolerant (active/passive) mode for the “Shared PERC 8” shared storage controller. Refer to “The Controller Failover Feature” section in the user guide.
    • M820 Server Platform support.

VMware ESXi support

At this point, VMware ESXi 5.1 Update 1, ESXi 5.1 Update 2, and ESXi 5.5 are supported for this VRTX release.

NOTE: VMware ESXi 5.5 Update 1 support for PowerEdge VRTX, including “Fault Tolerant Shared PERC 8” support, is targeted for the June release.

The specific details of ESXi version support and the driver version details are outlined below.

ESXi 5.1 U1:

The Dell Customized ESXi 5.1 U1 A03 image carries Shared PERC8 driver version megaraid-sas-06.801.52.00 (single controller). Customers who want to upgrade to a dual-controller VRTX configuration with ESXi 5.1 U1 need to upgrade the driver to megaraid-sas-06.802.71.00, which can be downloaded from the VMware site. For VRTX to work in fault-tolerant (dual controller) mode, the firmware stack (CMC, main board firmware, and PERC firmware) must be upgraded to the latest releases available at support.dell.com.
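As a sketch, checking and upgrading the driver from an SSH session on the host might look like the following. This uses the standard esxcli VIB workflow rather than any Dell-specific syntax, and the offline-bundle path is a hypothetical placeholder for wherever you stage the download:

```shell
# Check which megaraid_sas driver version the host currently runs.
esxcli software vib list | grep -i megaraid

# Install the updated driver from an offline bundle (path below is a
# placeholder; esxcli requires an absolute path to the depot zip).
esxcli software vib install -d "/vmfs/volumes/datastore1/megaraid-sas-06.802.71.00-offline-bundle.zip"

# Reboot the host so the new driver is loaded.
reboot
```

Remember to place the host in maintenance mode and evacuate running VMs before installing the VIB and rebooting.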

ESXi 5.1 U2:

The Dell Customized ESXi 5.1 U2 A01 image carries Shared PERC8 driver version megaraid-sas-06.802.71.00 (dual controllers).

ESXi 5.5:

The Dell Customized ESXi 5.5 A04 image carries Shared PERC8 driver version megaraid_sas-06.803.52.00 (dual controllers).

NOTE: Customers installing ESXi 5.1 U2 on a single-controller VRTX system need to upgrade the firmware stack of the CMC, main board, and Shared PERC8 to the latest versions available at the Dell support site.

How do I upgrade an existing PowerEdge VRTX to support Fault Tolerant Shared PERC 8?

Refer to the setup guide posted at the Dell support site and follow the procedure to upgrade from a single-controller VRTX configuration to a fault-tolerant Shared PowerEdge RAID Controller (PERC) 8 configuration.

Multipathing support in VMware ESXi for PowerEdge VRTX

Dell recommends using the “Most Recently Used” path policy in VMware ESXi for multipathing with PowerEdge VRTX. Refer to the Multipath Support in VMware section in the PowerEdge VRTX User Guide for more details.
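For illustration, setting that path policy with esxcli might look like the sketch below. The naa device identifier is a hypothetical placeholder; substitute the ID of your shared LUN as reported by the first command:

```shell
# List devices with their current path selection policies to find the
# shared PERC 8 LUN's naa identifier.
esxcli storage nmp device list

# Set the Most Recently Used policy (VMW_PSP_MRU) on that device;
# the naa ID below is a placeholder.
esxcli storage nmp device set --device naa.6b083fe000000000000000000000000f --psp VMW_PSP_MRU
```

The change takes effect immediately for the device; repeat it for each shared LUN presented by the Shared PERC 8 controllers.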

Known Issues with VRTX and ESXi

Refer to "VMware vSphere 5 On Dell PowerEdge Systems Release Notes" document available at Dell support site which lists the set of known issues for this release. 

ChangeBASE 6.1 is here!

I am very excited to be able to announce the new release of ChangeBASE 6.1 today.

We have further enhanced ChangeBASE 6.1 so you can use its powerful automation engine as the backbone of your automated application lifecycle management processes.

New features include:

  • Easy integration: easier integration with third-party tools to collect more application-related data for richer application reporting, e.g. software usage data to show the impact of software across the estate
  • Tailored automation: set up custom repackaging or virtualization task configurations for any packaging tool and automate how ChangeBASE converts applications
  • Built-in repackaging: users can now repackage in ChangeBASE using MSI Studio. In ChangeBASE 6.1, users can automate the repackaging of their EXEs using MSI Studio rather than the proprietary ChangeBASE repackager (previously known as "Convert-It")
  • IE11 web application compatibility: inclusion of Internet Explorer 11 checks, as well as improved details for all web application checks
  • Dependency analysis: ability to perform dependency analysis for Java and .NET
  • Radia assessments: ability to import and assess native Radia packages
  • Drill-down dashboards: ability to drill down into dashboard charts to reveal the relevant details

Existing customers can download the new version of ChangeBASE 6.1 from our support portal.

A free 30-day trial can be downloaded from our website today: www.software.dell.com/changebase
