
If you are still backing up – you’re fired!


We take the concept of 'backing up' our data for granted. "Of course I back up – who doesn't protect their data?" Wrong! Backing up is the IT equivalent of buying insurance. It's passive, it's reactive – and that's not today's IT, which has to be now, real time, active. Data protection has to match the business. Recovery and resiliency are the 'now' of data protection, and they are how you match data protection to the way today's IT operates.

So what does it mean to be active in data protection or to provide business resiliency? It involves three key concepts that transform the passive activity of backup into an active effort – making the recovery process intelligent, adding disaster recovery to backup and ensuring that you can recover every time.

Intelligent recovery

Blindly making copies of data files (with an associated index) once a day does not cut it in today’s world. Modern companies need to have duplicated copies to be sure, but those copies need to add value, not just add to the number of hard drives you own.

It starts with having multiple restore points at short intervals during the day. Your recovery points should be granular enough to match the value of your business – say every 15 minutes – for everything: application, OS, data – the entire stack. Once you have these granular recovery points, you need to be able to transform your backup to any format you need, no matter how it was protected originally – physical, virtual or cloud. Any-to-any recovery, migration and format transformation all need to be part of the intelligent process.

Built in local and remote disaster recovery

Your data protection process needs to include local and remote disaster recovery. You need to be able to immediately stand up a failed machine locally (in a virtual machine) so that SLAs are preserved. Application owners don't get SLAs measured in days, so why should backup admins?

To protect against site or geo failures, your backups need to be efficiently replicated offsite. Your data protection solution needs to both perform optimized (de-duplicated) replication and be able to rapidly stand up one or more machines at a remote site in the event of a failure.

Recovery assurance

You have to be sure you can recover – every time, period. An active data protection solution ensures this by mounting your backups and checking that they are recoverable. You can’t afford to be one of the crowd that finds out that they cannot restore a database or email when they need it most.
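
The same check can be scripted generically outside of any particular product. Below is a minimal PowerShell sketch, assuming a restore point has already been exported to a VHDX file (the path and expected file are hypothetical placeholders) and that the Hyper-V module's Mount-VHD cmdlet is available; it test-mounts the image read-only and verifies that an expected file is present.

    # Minimal recovery-verification sketch. Assumptions: paths and the expected file
    # are placeholders; this is not any product's built-in recovery assurance feature.
    $restorePoint = 'D:\RestorePoints\sql01-2014-02-01.vhdx'
    $expectedFile = 'Data\master.mdf'

    # Mount the exported restore point read-only and locate its first volume.
    $volume = Mount-VHD -Path $restorePoint -ReadOnly -Passthru |
              Get-Disk | Get-Partition | Get-Volume | Select-Object -First 1

    try {
        $probe = Join-Path "$($volume.DriveLetter):\" $expectedFile
        if (Test-Path $probe) {
            Write-Output "Restore point mounts and contains $expectedFile - looks recoverable."
        } else {
            Write-Warning "Restore point mounted but $expectedFile is missing."
        }
    } finally {
        Dismount-VHD -Path $restorePoint   # always clean up the test mount
    }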

So as you think about your business goals moving forward, take a moment to consider how you can better match recovery and business resiliency to your business outcomes. It's time to transform our thinking on backup and make it an active and positive part of IT. Thanks for reading, and next time I will talk a little about how much it costs businesses to stay with old backup tools and processes. Spoiler alert: complacency costs a lot more than you think.

Eric Endebrock


Infographic — Solving the cloud client computing puzzle with Dell


The new mobile work style changes everything for your customers, your users and you. You want the ability to work anywhere, anytime, from any device. Who doesn't? Sure, there's direct pressure from your end users to enable mobility. All pressure aside, though, you likely want to do it. But delivering for every user everywhere on every device, maintaining your budget and all the while guaranteeing security is a puzzle of grand proportions: it's massive, complex and ever-changing. They don't call it the cloud for nothing. But this riddle is solvable. Dell's cloud client computing brings to the table a combination of data center, clients, software and services to optimize desktop virtualization. And because it's all customized to meet your specific needs, the puzzle pieces have finally begun to fit together.

 What is cloud client computing?

The older generation of tech-savvy consumers has witnessed the move from the mainframe to the PC to the laptop to the handheld — too many devices to name them all. Cloud client computing is running desktops from a centralized location in the data center or the cloud to enable various endpoints, from cloud clients to full feature laptops, tablets or even workstations. It all depends on your people’s use cases and needs. Cloud client computing is the new model for providing users with the right computing environment for their use cases while providing IT leaders the genuine reliability and security required. Cloud client computing is how we put together all the pieces of the puzzle: hardware, software and services.

When considering your approach to the virtual client, consider how your customers perceive you. Your content, your apps, your OS – they've got to work well and look good. Your people need to access their sometimes sophisticated and mountainous data and share it with others anywhere, anytime, on any device without compromising security. And your IT department must manage all the possibilities and keep costs down.

Many organizations are discovering that desktop virtualization done right enables this new paradigm. With a little help, you can use the cloud to improve the user experience and simultaneously improve your position in the marketplace.

Refer to the image below or download the full size infographic to learn more about how you can use Dell Cloud Client Computing to empower your end users.

Puzzle solved

With Dell Cloud Client Computing, you can navigate what was once a baffling maze. But many of us need help with the actual work. That’s where Dell comes in. With services and solutions that span end-to-end from the data center to the end-user device, Dell can help you regardless of where you are in your journey to the cloud and virtual desktops. Dell can manage everything for you or you can manage your virtual desktops on your own. It’s your choice. Select one or more of the following Dell offerings:

  • Flexible solutions tailored to fit your needs
  • Implementation and deployment services
  • Services to manage your environment after deployment
  • Support for key virtual desktop providers: Citrix, Microsoft, VMware and Dell vWorkspace

Dell offers a holistic, end-to-end solution that enables the new mobile work style, lowers costs, is manageable from a single point and is secure. And Dell is accountable when you need support. Rise above the competition by empowering your people with Dell and cloud client computing. Puzzle solved.

For more information visit Dell.com/cloudclientcomputing.


 

OpenStack Board of Directors Talks: Episode 11 with Tristan Goode, founder and CEO of Aptira


Learn firsthand about OpenStack, its challenges, opportunities and market adoption. My goal is to interview all 24 members of the OpenStack board, and I will post these talks sequentially at Dell TechCenter.


Q: Can you please introduce yourself?

Tristan Goode: I am the founder and CEO of a managed service OpenStack provider with presence in Australia and India. I am also the founder and lead organizer of the Australian OpenStack User Group and a supporter of the Indian OpenStack User Group, both personally and through my company, Aptira. I'm also very fortunate to be twice elected to the OpenStack Foundation Board of Directors as a representative of the individual membership. I’m also very proud to recently have become an OpenStack Ambassador.

Q: What are your duties as a board member?

A: I feel my primary duty is representing the users, operators and community builders in the OpenStack ecosystem. There's good representation on the board for vendors and developers of OpenStack, so I've always felt that there needs to be a place for those who are involved in the OpenStack community but don't write software or push a distribution.

Q: What is your view on the current Havana release? What are the major enhancements compared to Grizzly?

A: The current release brings OpenStack a lot closer to being production ready out of the box. There are currently some areas of concern with regard to scalability and stability in some of the newer incubated projects, but these are being worked on and a lot of effort is being put into making them as reliable as the older projects. Havana features a lot of improvements and bug fixes over the Grizzly release. There have been some long-awaited enhancements to the Dashboard, like the ability to change passwords – a very simple and long-needed function. Global cluster support for OpenStack Swift, API enhancements, improved support for Nova Cells and the addition of Docker support are some of the other major enhancements.

Q: What is your favourite feature in OpenStack Havana?

A: Usability improvements to the Dashboard and the better integration of Keystone and Neutron functionality to the Dashboard have to be up there.

Q: Is there anything that still needs to be improved in OpenStack or doesn't work well?

A: Despite the hard work of the documentation team, the nature of OpenStack's release cycle makes it difficult for them to keep pace with all the changes and features introduced in new releases. More needs to be done to keep the documentation current, and I want the Foundation to really focus on how we do this. Greater attention also needs to be paid to how some of the newly incubated projects fit with OpenStack's core design tenets; more care needs to be taken to ensure that new features being introduced do not break those tenets. Also, we don't want OpenStack to become a trojan horse for vendors, so that means each project (Compute, Storage and especially Networking) should have a first-class and free option for systems administrators to implement.

Q: Let me go back to my previous question. Are there any projects in particular which you feel are not fully mature yet?

A: The Neutron project has some scalability issues that I am a bit concerned about. While there are a lot of contributors to the Neutron project, it seems that a large chunk of the development is done on vendor-specific drivers and interfaces. Currently a Neutron solution deployed at scale requires third-party proprietary software; the open source option needs to be bolstered to avoid this.

Q: How do you envision OpenStack two or three years from now?

A: As the default cloud operating system. I have a lot of conversations with enterprises that have no other choice than to "glue" various components of their infrastructure into a cohesive state using OpenStack – right across their entire business. OpenStack is a unique opportunity for nations and businesses around the world to develop ground-up cloud solutions without proprietary or data sovereignty constraints.

Q: What is your view on Linux as a role model for OpenStack?

A: The important job of the foundation is to define core very, very well because if we don't, it's going to lead to a fractured ecosystem. Taking Linux as an example, we need to control our OpenStack “kernel” with a rich environment and a whole bunch of different distributions around it. As an example, we need to control the use of the word certification and similar wording used with OpenStack. In many cases presently, some organisations offering training have so called “certifications” for largely proprietary flavours of OpenStack – and this has to be controlled so the consumer knows what they are getting.

Q: What are the biggest challenges ahead of the board?

A: Getting better user and operator feedback to the development teams is critical. With the proliferation of new projects, there are concerns being raised about the lack of cohesiveness between the projects and the perception that the developers are not heeding real-world deployment feedback and are racing to add features and new projects. Essentially, we all need to work together and constantly look at the big-picture view of the project so OpenStack is stable, scalable and secure, even if that means decelerating features.

Q: Thank you for your time.

A: Thank you.

Microsoft Most Valuable Professional (MVP) – Best Posts of the Week around Windows Server, Exchange, SystemCenter and more – #64


"The Elway Gazette"

Hi Community, here is my compilation of the most interesting technical blog posts written by members of the Microsoft MVP Community. The number of MVPs is growing well, I hope you enjoy their posts. @all MVPs If you'd like me to add your blog posts to my weekly compilation, please send me an email (flo@datacenter-flo.de) or reach out to me via Twitter (@FloKlaffenbach). Thanks!

Featured Posts of the Week!

KB2913695–OffloadWrite Does PrepareForCriticalIo For Whole VHD On WS2012 Or WS2012 R2 Hyper-V by Aidan Finn

KB2913461 – Hyper-V Live Storage Migration Fails When Moving Files to WS2012 CSV by Aidan Finn

Useful Tools for Hyper-V Administrators by Aidan Finn

Hyper-V did not find virtual machines to import from location “*”… migration from 2K8R2 to WS2012R2 by Romeo Mlinar

Azure 

Windows Azure Hyper-V Recovery Manager verfügbar in German by Daniel Neumann

Events

THE BIRTH OF THE CLOUD AND DATACENTER MANAGEMENT GROUP by Thomas Maurer

Hyper-V

Hyper-V Recovery Manager – Teil 2: Planung und Voraussetzungen in German by Benedict Berger

Cannot start imported Hyper-V VM: The Security ID structure is invalid (0x80070539) by Lai Yoong Seng

Where Does Storage QoS Live In Windows Server 2012 R2 Hyper-V by Didier van Hoye

KB2913695–OffloadWrite Does PrepareForCriticalIo For Whole VHD On WS2012 Or WS2012 R2 Hyper-V by Aidan Finn

KB2913461 – Hyper-V Live Storage Migration Fails When Moving Files to WS2012 CSV by Aidan Finn

Useful Tools for Hyper-V Administrators by Aidan Finn

KB2923885 – WS2012 VMs With SR-IOV on WS2012 R2 Hyper-V Incorrectly Say Integration Components Require Upgrade by Aidan Finn

A Kit/Parts List For A WS2012 R2 Hyper-V Cluster With DataOn SMB 3.0 Storage by Aidan Finn

Hyper-V and Cluster Update List for Windows Server 2012 R2 by Lai Yoong Seng

HYPER-V RECOVERY MANAGER (HRM) FAQ by Thomas Maurer

The Purpose of a Hyper-V Failover Cluster by Aidan Finn

Hyper-V did not find virtual machines to import from location “*”… migration from 2K8R2 to WS2012R2 by Romeo Mlinar

How To Monitor Storage QoS Minimum IOPS & Identify VM & The Virtual Hard Disk In Windows Server 2012 R2 Hyper-V by Didier van Hoye 

Office 365

How to change a Microsoft Account for two factor authentication with sending the notification code to another email address by Toni Pohl

PowerShell

#PSTip Using Start-BitsTransfer and waiting for download to complete by Ravikanth Chaganti

Managing Kemp LoadMaster using REST API and PowerShell by  Ravikanth Chaganti

UPDATED POWERSHELL SCRIPT PROFILER by Jeffery Hicks

Deploying a Desired State Configuration Web Host using PowerShell by Damian Flynn

Get-InstalledUpdates – List all installed updates and hotfixes by Jeff Wouters

Announcing PowerShell Information Server (PSIS) by Ravikanth Chaganti

Powershell Deployment Toolkit: Introducing Variables.XML by Damian Flynn

Sharepoint 

***NOW AVAILABLE*** Pluralsight's SharePoint 2013 Workflow - Advanced Topics by Andrew Connell

System Center Configuration Manager

How to find ConfigMgr Collection membership of client via Powershell? by David O'Brien

Reports not working after upgrade to ConfigMgr 2012 R2 by David O'Brien

System Center Dataprotection Manager

Can Use Storsimple LUN as DPM Storage Pool? by Lai Yoong Seng

System Center Operations Manager

Which servers have an Operations Manager client? by Jeff Wouters

NOTES FROM SYSTEM CENTER BATTLEFIELD: MONITORING GUEST CLUSTERS WITH SCOM by Stanislav Zhelyazkov

System Center Service Manager

ITNETX CIRESON ADVANCED SEND MAIL FOR SERVICE MANAGER 2012 AND 2012 R2 by Thomas Maurer

System Center Virtual Machine Manager

SCVMM 2012 R2 and deploying VM´s with altered name on vhdx by Niklas Akerlund

DYNAMIC IP ADDRESS LEARNING NOT WORKING ON NEWLY DEPLOYED VMS IN SCVMM 2012 R2 by Stanislav Zhelyazkov

System Center 2012 R2 Virtual Machine Manager – Update Baseline neue Updates hinzufügen in German by Daniel Neumann

System Center 2012 R2 Virtual Machine Manager – Ändern des BITS Ports in German by Daniel Neumann

VMM Configuration Analyzer for 2012 R2 by Niklas Akerlund

VLAN Tagging in System Center 2012 R2 Virtual Machine Manager #SCVMM #sysctr #SDN by James van den Berg

KB2901237–Hyper-V Replica Is Created Unexpectedly After Restarting VMMS On Replica WS2012 Host by Aidan Finn

SCVMM VM property dialog crash console with missing vhd(x) by Niklas Akerlund

Virtual Hard Disk Sharing ( shared VHDX ) Storage Pool Usage in Failover Clustering Windows Server 2012 R2 by Robert Smit

64 Node Sharing Virtual Hard Disk (shared VHDX) in Failover Clustering Windows Server 2012 R2 by Robert Smit

Windows Client

What´s New in Windows 8.1 Security by Martina Grom

Suche in Windows 8 findet nichts in PDFs? in German by Nils Kaczenski

Right-Click On Start And Windows+X Fail To Open Power User Menu On Windows 8.X by Aidan Finn

Windows Server

Remove an iSCSI Target on Windows Server 2012 R2 by Romeo Mlinar

CSV Cache Is Not Used With Heapmap-Tracked Tiered Storage Spaces by Aidan Finn

Windows Server 2012 R2 Scale-Out File Server Clusters thoughts by Robert Smit

KB2914974–Cluster Validation Wizard Might Not Discover All LUNs On WS2012 Or WS2012 R2 Failover Cluster by Aidan Finn

KB2916993 – Stop Error 0x9E In WS2012 Or Windows 8 by Aidan Finn

Cleanup Windows.old after upgrading to Windows Server 2012 R2 #winserv by Robert Smit 

Tools 

VSS Diagnostic -VSSDiag Tool by Lai Yoong Seng

Dell Driver CAB files for Enterprise Client OS Deployment now available for Windows 8.1


The Dell Client System Deployment CAB files offer new levels of ease and flexibility for creating and deploying customized OS images on Dell Enterprise Client systems (Latitude™, Optiplex™, Precision™, XPS and Tablet systems) with deployment tools like Microsoft System Center Configuration Manager (ConfigMgr/SCCM) or Microsoft Deployment Toolkit (MDT).

 

Windows 8.1 operating system deployment requires one of the following deployment tool configurations:

  1. System Center 2012 R2 Configuration Manager
  2. System Center 2012 Configuration Manager Service Pack 1 with Cumulative Update 3 – details for enabling Windows 8.1 deployment using SCCM 2012 SP1 CU3 are available here.
  3. Microsoft Deployment Toolkit 2013
  4. Other systems management consoles that can consume .INF drivers with their OS deployment process (Altiris, LANDesk, etc.)

 

Dell provides a WinPE 5.0 CAB that includes network and mass storage drivers in 32-bit and 64-bit versions for all supported platforms for Windows 8.1 operating system deployment. Details on importing the WinPE image into Configuration Manager are available here.
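
If you build boot images by hand rather than through a deployment console, one common approach is to extract the WinPE CAB and inject its drivers into a boot.wim using the DISM PowerShell module. The sketch below uses placeholder file names and paths (they are not Dell's published ones) and assumes a host with the Windows 8.1 ADK-era DISM cmdlets available.

    # Sketch only: unpack a downloaded WinPE driver CAB and inject the drivers into a
    # WinPE 5.0 boot image. File names and paths are placeholders, not Dell defaults.
    $cab     = 'C:\Downloads\WinPE5.0-Drivers.cab'
    $drivers = 'C:\Drivers\WinPE'
    $bootWim = 'C:\Images\boot.wim'
    $mount   = 'C:\Mount'

    New-Item -ItemType Directory -Path $drivers, $mount -Force | Out-Null
    expand.exe -F:* $cab $drivers                                # unpack the .INF driver payload

    Mount-WindowsImage -ImagePath $bootWim -Index 1 -Path $mount
    Add-WindowsDriver -Path $mount -Driver $drivers -Recurse     # inject network/storage drivers
    Dismount-WindowsImage -Path $mount -Save                     # commit the changes to boot.wim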

 

Windows 8.1 Dell driver CABs for Dell Enterprise-class systems (Latitude, Optiplex, Precision, XPS, Tablet) are available. Dell Client Integration Pack (DCIP) 3.1 can be used to import the Windows 8.1 driver CABs into SCCM.

 

The truth about AppAssure


I did my personal performance review over the weekend. It's always important to look back at what you've been able to accomplish, especially when you work in the fast-paced world of enterprise software. Looking back, we've accomplished a lot. We combined four products into a Data Protection portfolio. We combined several communication vehicles – social media accounts, websites, blogs, SalesForce instances, marketing automation programs… you name it, we had three of them (Dell, Quest, AppAssure). We worked to combine the processes and IT stacks of three companies. It was rough sometimes, but we're just about through it.

If we’re connected on LinkedIn, you know I finally got around to updating my title. I’ve been the Product Marketing Manager for AppAssure for the last six months. A lot of my time has been spent on the activities I mentioned above, plus working on AppAssure’s reputation. One of the things we did to address this was invite one of the founders of AppAssure, now Dell’s Data Protection GM, to speak to the Tech Field Day 9 candidates back in June. He talked about what actually happened when we first released AppAssure 5.0, and what we’ve done to fix it:

(Please visit the site to view this video)

Still, I feel like I've been slacking off on one of the things I wanted to bring to this role – writing technical things in social spaces about my product. I've come across some things marketers from other companies are saying about AppAssure, and I think it's time to deal with this issue head-on.

Untruths you may hear about AppAssure:

  • AppAssure is a software-only product. NOT TRUE.
    AppAssure is absolutely available in an appliance version, called the DL4000. Come on y'all, AppAssure is a Dell product now. Why wouldn't we put the power of our PowerVault servers behind AppAssure, to make protecting data in your environment as quick and easy for you as possible?
  • AppAssure's lowest RPO (the time when a backup is taken) is one hour. NOT TRUE.
    Granularity for backups (and therefore recovery) using AppAssure is 5 minutes.
  • AppAssure does not have capacity-based licensing. NOT TRUE.
    Currently capacity licensing is available with the DL4000s.
  • AppAssure cannot do File Level Recovery. NOT TRUE.
    AppAssure uses a tool called the Local Mount Utility to restore files. This downloadable application lets you mount a recovery point from the AppAssure Core (AppAssure's central repository – where all the snapshots live) on any protected machine. Once the recovery point is mounted, its file system can be explored. Using the tool you can recover data all the way down to an individual file.
  • AppAssure has only limited bare metal support, for Windows only, and it requires an additional license. NOT TRUE.
    AppAssure supports bare metal recovery for Windows and Linux, and it is included at no additional cost.
  • AppAssure only backs up a single server without understanding the relational connection between the data and the application. NOT TRUE.
    AppAssure is application aware, even with distributed applications. It supports CCR, DAG, SCC, and Hyper-V CSV.
  • AppAssure has a lower ROI and TCO due to the higher operational expense of putting together, managing, and monitoring servers, storage, networks, operating systems and data protection software. NOT TRUE.
    The DL4000 appliance is a self-contained server sized perfectly for AppAssure; it includes two licensed virtual machines that can be used for DR, and you can even purchase additional storage if you need it. Or, if you are more comfortable managing all of the elements of your environment, you can purchase the software on its own. We leave the choice up to you.

What else have you heard about AppAssure? Ask me in the comments, or just check out this video we published when AppAssure 5.3 was released. Let's open up this conversation, because we have some cool stuff to share coming soon!

 (Please visit the site to view this video)

edited to fix one fact. Sorry y'all!

Software Defined Data Centers and Their Implications for Data Protection


To understand the implications for data protection and disaster recovery in a "software defined data center", I need to first make sure we are on the same page as to what that term even means.

Defining SDDC (software defined data center)

Essentially, what the industry is trying to do is separate the control and data paths in the various stacks that make up a data center, such as networking, storage and compute. The control path is the logic that decides how data moves, and the data path is the physical movement (or storage) of the data. We don't want both of these pieces of logic to be in the same stack, let alone tied to a particular piece of firmware or hardware.

Why do we want to do this? Well, when the data and control paths are bound to a physical piece of hardware, the data center and the IT staff are less agile. Any change to the infrastructure involves physical work, such as cabling changes, hard programming of the network switches or changing policies in each of the storage arrays. The IT staff has to log into the hardware and manually change the policies and settings, often differently for each piece of networking, storage and compute equipment they have.

In an ideal world, the data center is fully automated and adaptive to ever changing business workflow needs… dynamically reconfiguring the flow of information in the switches, dynamically deciding which storage array the data is destined for and how it is to be protected.

The underlying hardware should not be a factor in how these decisions are made, and most importantly, humans (IT) should not be pulled into making these changes on an ongoing basis.

History of how the industry started moving towards SDDC

So the first steps in the industry started with dynamically changing network profiles. Without getting into too many details, this concept is known as software defined networking, or SDN, and it was predominantly driven by OpenFlow. Here, the switches maintain the data path, but the control information (such as which port a certain packet needs to go out of, or whether it should be dropped) is determined by a separate control path, typically running on a server. There you have it: the IT staff – if they have an all-OpenFlow switch infrastructure – can determine how to route all the data center traffic from one centralized server console.

With storage, software defined storage mostly implies taking software features such as virtual volumes, quotas, replication requirements, etc., out of the arrays and into a central server, while treating the disks as just a bunch of disks.

Storage complicates the issue further through the lack of any good clustered file system that can sit on top of dumb commodity storage arrays. In a nutshell, SDS is still a lot of good thought and not much action.

How can data protection be part of the SDDC

Well, what does all this mean for data protection? The first thing I realize is that there is no unified "SDDC stack" yet. The concept exists and is evolving, but there is no concrete standard (such as OpenFlow) for what SDDC needs to look like, and specifically what SDS needs to look like (SMI-S did not get very far).

The first thing data protection products need to do is handle the SDN and SDS software stacks as a specific application to be snapped, backed up and replicated, just like we treat databases as special software stacks. That is, we need to snap the control stack (SDN or SDS) and its data stacks together to assure the equivalent of the "application consistency" that we offer today. Let me dig into that a little bit more… today, there is a difference between crash-consistent and application-consistent backups. Crash consistency implies the data is intact, but the application may not behave exactly as it did when the disaster happened. So we now implement various strategies within applications to enforce application-consistent snapshots. In a software defined data center world, we need to extend this to reach into the control stacks of the arrays and the network controllers to make sure that the entire data center can be brought back online exactly how it was before it went down.

Secondly, we need to integrate the data protection logic, such as AppAssure, into the SDN and SDS stacks directly. We can't be a separate stack or implemented in a separate layer. The whole data center needs to be orchestrated from one console, by one set of policies and perhaps by one congealed software stack. Let me give you an example here… As an IT administrator I want to be able to set a policy in my SDDC console that says "At 2 AM, add 10GB of extra bandwidth to the XYZ Oracle application just after snapshotting the entire stack. Replicate the stack to data center B and run data center B in standby mode with 5GB bandwidth."
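
To make that concrete, here is a purely hypothetical sketch of what such a declarative policy might look like if expressed in PowerShell. No SDDC console or API like this exists today; every name below is invented for illustration only.

    # Hypothetical only: a declarative protection policy of the kind described above.
    # Nothing here maps to a real product; the structure is the point, not the names.
    $policy = @{
        Name     = 'XYZ-Oracle-NightlyProtection'
        Schedule = '02:00'
        Steps    = @(
            @{ Action = 'SnapshotStack';  Scope = 'Compute,Storage,Network' }   # app + SDS + SDN state
            @{ Action = 'AddBandwidth';   TargetApp = 'XYZ-Oracle'; ExtraGb = 10 }
            @{ Action = 'ReplicateStack'; Destination = 'DatacenterB' }
            @{ Action = 'RunStandby';     Site = 'DatacenterB'; BandwidthGb = 5 }
        )
    }
    $policy | ConvertTo-Json -Depth 3    # the whole protection workflow as one policy document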

One day we are going to get there, and pieces are falling into place.  We just need to be in sync and at pace with the rest of the industry.

How do you see data protection fitting into a SDDC? Let’s discuss in the comments.

Why backup and recovery is being forced to change


In the last decade there has been an enormous amount of change that has broken traditional backup and recovery techniques. From the rapid growth of data to the ubiquitous adoption of virtualization and even the use of cloud technologies, the landscape of protecting customers' data is nothing developers could have imagined a generation ago. The days when customers would stick with their tried-and-true backup software are over; customers are not just looking for a new vendor but are re-architecting their entire data protection strategy.

While many changes seem obvious, there are many variables driving change in the way customers protect and manage their data centers. Below are a few of the factors I see driving this change.

  1. Data growth – The ever-growing backup window has pushed IT managers to look at solutions that provide real-time or near real-time protection while minimizing network traffic.
  2. 24-hour operations – Traditionally, backup jobs were run at night and on weekends when network traffic was minimal. Many IT departments must now deal with applications that run 24/7.
  3. Virtualization – With applications now running on virtual platforms, recovering the application is just one step. Many times there may be a need to recover the virtual infrastructure or even migrate to a different virtual or physical environment.
  4. Transition away from tape – While many customers continue to use tape as a cost-effective solution, a majority of IT managers now augment it with disk-based and even cloud-based storage. Utilizing the random access capability of disk and cloud interfaces like S3 can provide performance and cost tradeoffs that make sense. Additionally, technologies like compression and de-duplication have come a long way and play a major role in reducing the cost of using higher-cost/higher-performance disk drive solutions for backup and recovery storage.
  5. Required downtime – For many, the time allowed to recover from a disaster or even a lost file has been dramatically reduced, and IT managers are looking for ways to recover applications and files in seconds instead of days.
  6. Too many tools – As tools proliferated for backup, recovery, and business continuity, IT managers were forced to deal with multiple tools and methods. Today there is a need to consolidate and provide backup and recovery, archive, and even high availability functions in a unified way.

I actually really like what all these changes are doing for the backup and recovery tools of today. Backup and recovery products can now do things like maintaining a virtual standby application that can be stood up when a primary application goes down. Snapshot and de-duplication technologies have allowed customers to keep close to real-time backups with recovery points that can be accessed in seconds, letting them use backup and recovery tools in new ways.

This year I will be blogging about many of these and other trends and changes in the backup and recovery space and sharing my thoughts. I welcome feedback and would love to get your perspective as well – please leave me a note in the comments.


Going broke – why it's costing you more than you think to back up


In my last post I introduced the concept of "Active" backup and why it matters for modern IT. I wanted to follow up with a discussion of the costs (direct and hidden) that complacency around your existing backup solution is having on your business. Depending on how you view backup, a typical response from IT is to simply, maybe mindlessly, renew what you already have every year when the maintenance contract comes up. This all-too-common action is costing you more than you think.

So what should you do? Step one is identifying and sizing this impact by looking at how you are matching backup to your business. A recent study by Computing Research helps put this in context and outlines the key areas that drive costs. Let’s take a look at two of the key traps customers fall into.

Using a single product that does not/cannot match an application's business criticality – The study indicates that roughly 43% of customers are in this boat. On the surface this seems like a great way to save money, but within this method lurks one of the largest hidden costs – slow recovery. Let's face it, recovery is what we are after, not backup. This approach is designed to make IT's life easier from an administrative standpoint, but it doesn't address the business need for rapid recovery (RPO/RTO) and the SLAs required by business-critical applications. It can be penny-wise, dollar-foolish for the bottom line as the number of recoveries rises and the time in which complete recovery is expected continues to decline – dramatically. You face serious costs in downtime (real dollars) and end-user/customer ill will every time IT cannot deliver in the timeframe desired.

Using many products, but without control and/or optimization – This is the second most common approach to meeting business needs, with 28% of respondents reporting it as their method. On the positive side, this approach at a minimum better matches the needs of applications and business owners. They often see improvements in recovery and have tools that better meet their needs, but at the cost of complexity and operational inefficiency. Under this model there often exist ownership issues and process disconnects that hinder rather than help. In fact, the top four issues cited with this approach are:

  1. Complex processes that slow recovery (recall recovery is what we are really after),
  2. Compatibility between products,
  3. Licensing,
  4. Support and training issues.

 It’s not that having different backup approaches or even interfaces is always bad, but if done without operational consideration this cure might be worse than the disease.

In case you were wondering – the third most common approach is to ignore protecting applications altogether (17% of respondents citing this as their choice). Needless to say, this is fraught with more than just cost – it carries true business risk of failure. Can you say "going out of business"?

So what is a company to do? For the close to 65% of customers who are not directly addressing application and business resiliency with a new approach, that's the place to start. Stop robotically renewing your current backup ISV's maintenance contract and treating backup as if it's mindless infrastructure, and let business resiliency be part of the value-added IT service you provide – a core application in its own right.

For those customers who have started matching backup to their business needs - congratulations, it’s a great first step. Your next move is to focus on operationally tying those products together so that they are not unnecessarily contributing to your overall storage growth and driving up costs in areas you did not expect (more on this in future blogs). In doing this you will also have the opportunity to ensure the ownership and processes necessary for rapid recovery are in place.

Where are you on this journey? Share your experiences in the comments section.

Eric Endebrock

Coming Soon: New TechCenter Solutions


Dell Software is Coming to TechCenter!

Hello all.  My name is Daryll Swager, and I'm the manager of the Customer Engagement team at Dell Software.  I manage a team of folks who build our software communities, run our social media program, and build our customer case studies.  I came to Dell from the Quest acquisition in late 2012.  This is my first post on TechCenter, and I can guarantee you it won't be my last.

After Quest was acquired and we became Dell Software, we started researching how to provide our community users with a best-in-class community experience, while at the same time integrating into Dell if possible. At its core, Dell has a great set of communities for people to talk about technology. TechCenter, for example, is a great place to get information about IT and data center management. When I first visited this community, I was very impressed by the breadth of content, and some of the articles/blogs were (and still are) very technical in nature.

Because of all of these things, and because we would love to help cultivate an even larger ecosystem of customers and tech geeks, Dell Software has made the decision to bring their solution communities to Dell TechCenter.

This is no small task.  We need to:

  • Look at our users and move over the ones who don't have accounts on this community
  • Get messages to the users whose accounts need special handling in order to move over
  • Bring the forum and blog content over from the old communities
  • Change the layout and organization of TechCenter to make room for our new communities

A lot of the work above is going on right now, behind the scenes.  It's going very well and we anticipate that by this time next month, all of the Software communities will be open.

In some cases (like the Virtualization or the Cloud communities), we're joining a community that already has participation and at least some level of activity.  Please welcome your new participants with open arms.  I am personally very excited to be bringing our IT crowd to this great community!

Look for more updates from me next week with additional details, such as:

  • What kinds of content we'll be launching
  • What communities we're most excited about
  • Some of the content creators and bloggers that we'll be bringing over to this community

In the meantime, feel free to reach out to me on LinkedIn or by commenting on this post.  I'm here to answer any questions folks might have about this move!

-Daryll

Technical Differences in the Various Data Optimization Techniques


With data growing so quickly, data optimization technology needs to be in place and integrated with all tiers of storage – primary, archive and backup. But there are differences in the dedupe and compression technologies and algorithms one can use, so what strategy should you use at each tier of storage? I'll cover that in my next post, but first – if you are wondering what the differences between dedupe and compression techniques are, let me list a few in this post:

Technologies to consider with Compression techniques:

Inline versus post-process strategies: Do you compress the data on ingest, or after it has gone idle and is already stored? This question actually applies to dedupe as well. Thankfully, it usually makes sense to apply the two data optimization techniques together in a pipeline: first perform dedupe to eliminate redundant data altogether, then compress the residual (truly novel) data.
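
Here is a minimal in-memory sketch of that pipeline order (toy PowerShell, not how any shipping dedupe engine stores data): hash each chunk, keep only chunks that have not been seen before, and deflate only those survivors.

    # Toy dedupe-then-compress pipeline. Assumption: an in-memory hashtable stands in
    # for the repository; real products persist chunks and indexes to disk.
    $sha   = [System.Security.Cryptography.SHA256]::Create()
    $store = @{}    # chunk hash -> compressed bytes

    function Compress-Chunk([byte[]]$Bytes) {
        $ms = New-Object System.IO.MemoryStream
        $ds = New-Object System.IO.Compression.DeflateStream($ms, [System.IO.Compression.CompressionMode]::Compress)
        $ds.Write($Bytes, 0, $Bytes.Length); $ds.Dispose()
        ,$ms.ToArray()
    }

    function Write-Chunk([byte[]]$Chunk) {
        $key = [BitConverter]::ToString($sha.ComputeHash($Chunk))
        if (-not $store.ContainsKey($key)) {          # stage 1: dedupe drops repeats entirely
            $store[$key] = Compress-Chunk $Chunk      # stage 2: compress only the novel data
        }
        $key                                          # callers keep this pointer, not the bytes
    }

    # Two identical 4 KB chunks cost one compressed copy plus two pointers.
    $chunk = [byte[]](@(65) * 4096)
    $null  = Write-Chunk $chunk
    $null  = Write-Chunk $chunk
    "Unique chunks stored: $($store.Count)"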

Statistical techniques versus dictionary compression approaches: Do you use code substitution algorithms or statistical compression techniques that learn the patterns of data over time to predict the next piece of information being stored?  To complicate things further, a lot of data today is already compressed in some form or another (take JPEG images for example).  So do you decode the data to further compress it with a superior algorithm?

How do you solve random access into compressed data? If the data is compressed, how do you access a byte of information at logical offset X? It is an issue because the data is no longer physically at offset X once it has been compressed. What sort of impact does accessing compressed data have for the tier in question? Any data transformation technique (be it dedupe, compression or encryption) has some performance impact on data transfers.
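
One common answer, sketched below as toy PowerShell (not how any particular array or appliance implements it), is to compress in fixed logical chunks and keep an index, so a read of logical offset X decompresses only the one chunk that contains it.

    # Toy chunk-indexed compression to support random reads into compressed data.
    # Assumption: an in-memory list of compressed chunks stands in for the on-disk layout.
    $ChunkSize    = 65536
    $script:index = @()     # entry N holds the compressed bytes of logical chunk N

    function Add-Data([byte[]]$Bytes) {
        for ($off = 0; $off -lt $Bytes.Length; $off += $ChunkSize) {
            $len = [Math]::Min($ChunkSize, $Bytes.Length - $off)
            $ms = New-Object System.IO.MemoryStream
            $ds = New-Object System.IO.Compression.DeflateStream($ms, [System.IO.Compression.CompressionMode]::Compress)
            $ds.Write($Bytes, $off, $len); $ds.Dispose()
            $script:index += ,$ms.ToArray()
        }
    }

    function Read-Byte([long]$LogicalOffset) {
        $chunkNo = [int][Math]::Floor($LogicalOffset / $ChunkSize)    # locate the owning chunk
        $in  = New-Object System.IO.MemoryStream(,$script:index[$chunkNo])
        $ds  = New-Object System.IO.Compression.DeflateStream($in, [System.IO.Compression.CompressionMode]::Decompress)
        $out = New-Object System.IO.MemoryStream
        $ds.CopyTo($out); $ds.Dispose()
        $out.ToArray()[$LogicalOffset % $ChunkSize]                   # decompress one chunk, not the whole store
    }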

How do I handle an already compressed data stream? Take an image: traditional compression algorithms will not be able to compress an already compressed JPEG. You need content-specific compression algorithms, and those can be CPU-expensive. So is it worth applying them to the tier of storage in question? To understand more about compression technology, take a look at Paq, an open source compression algorithm and suite started by one of our Data Protection Chief Scientists, Matt Mahoney.

Technologies to consider with Dedupe techniques:

What chunk size do I use? Dedupe works by chunking data, finding exact matches between chunks and substituting them with pointers. So, do I use a fixed block size, where data is chunked at predetermined physical boundaries, or variable sizes, where data is chunked at content-dependent boundaries so as to get better matches? See this article to understand the intricate details: http://virtualbill.wordpress.com/2011/02/24/fixed-block-vs-variable-block-deduplication-a-quick-primer/
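
The boundary-shift problem is the whole reason variable (content-defined) chunking exists, and it is easy to demonstrate with fixed blocks alone. The toy PowerShell below (block size and sample data are arbitrary) shows that inserting a single byte near the front of a stream shifts every later block boundary, so almost no fixed-size blocks still match the original.

    # Toy demo, not any product's algorithm: fixed-block chunking vs. a 1-byte insert.
    function Get-FixedBlockHashes([byte[]]$Data, [int]$BlockSize = 4096) {
        $sha = [System.Security.Cryptography.SHA256]::Create()
        for ($off = 0; $off -lt $Data.Length; $off += $BlockSize) {
            $len = [Math]::Min($BlockSize, $Data.Length - $off)
            [BitConverter]::ToString($sha.ComputeHash($Data, $off, $len))   # one hash per block
        }
    }

    $text     = (1..512 | ForEach-Object { [guid]::NewGuid().ToString() }) -join ''
    $original = [System.Text.Encoding]::ASCII.GetBytes($text)
    $shifted  = [byte[]](@([byte]88) + $original)        # same data with one byte inserted up front

    $a = Get-FixedBlockHashes $original
    $b = Get-FixedBlockHashes $shifted
    $common = @(Compare-Object $a $b -IncludeEqual -ExcludeDifferent).Count
    "Blocks still matching after a 1-byte insert: $common of $($a.Count)"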

How do I handle read-back speeds? This is an issue because when you dedupe data, you are essentially replacing sequential data with pointers to other data chunks. Those chunks were not stored in sequential order and end up scattered all over the storage subsystem. If you have spinning disks under the hood, this is a huge problem, since spinning disks don't like to be accessed randomly.

There are many more issues like this, but I won't get into all the minutiae – today.

The point behind this post is that you need to understand that there are differences in compression and dedupe technologies and a product needs to apply the correct strategy in each tier.  So what are the correct strategies?  Stay tuned for my next post on that!

 

Microsoft Most Valuable Professional (MVP) – Best Posts of the Week around Windows Server, Exchange, SystemCenter and more – #66


"The Elway Gazette"

Hi Community, here is my compilation of the most interesting technical blog posts written by members of the Microsoft MVP Community. The number of MVPs is growing well, I hope you enjoy their posts. @all MVPs If you'd like me to add your blog posts to my weekly compilation, please send me an email (flo@datacenter-flo.de) or reach out to me via Twitter (@FloKlaffenbach). Thanks!

Featured Posts of the Week!

Troubleshooting Windows Azure Pack by Kristian Nese

Reverting the Forest & Domain Functional Levels in Window Server 2008 R2, 2012, 2012 R2 by Didier van Hoye

Integration Between SCVMM 2012 R2 and SCCM 2012 R2–Software Updates by Lai Yoong Seng

Using Share storage in HYPER-V through VMM 2012 R2 by Susantha Silva

Azure 

Windows Azure Pack with ADFS and Windows Azure Multi-Factor Authentication – Part 1 by Marc van Eijk

Windows Azure Pack: Windows Azure technology in your datacenter #Cloud #WAPack by James van den Berg

Troubleshooting Windows Azure Pack by Kristian Nese

Hyper-V

Videointerview mit Taylor Brown über seine drei Favoriten in WS2012 R2 Hyper-V in German by Carsten Rachfahl

FreeBSD Has Built-In Guest OS Support For Hyper-V – Please Note The Wording! by Aidan Finn

Windows Server 2012 or WS2012 R2 Hyper-V Cluster Requirements by Aidan Finn

Microsoft Capacity Planner for Hyper-V Replica #Hyperv #SCVMM by James van den Berg

5NINE CLOUD SECURITY FOR HYPER-V 4.0 by Thomas Maurer

NVGRE GATEWAY CLUSTER NOT DISCOVERED COMPLETELY BY THE MULTI-TENANT RRAS MP by Stanislav Zhelyazkov

WINDOWS SERVER 2012 R2 PRIVATE CLOUD VIRTUALIZATION AND STORAGE POSTER AND MINI-POSTERS by Thomas Maurer

Add new Windows Server 2012 R2 Hyper-V Node to the Cluster with SC2012R2 VMM #SCVMM #Hyperv by James van den Berg 

Office 365

Wie Sie mit Hilfe von IdFix Fehler im Active Directory finden und beheben in German by Martina Grom

PowerShell

Deploying a Desired State Configuration Web Host Using DSC by Damian Flynn

Use PowerShell and repadmin to get active directory replication status by Jeff Wouters

I WANT POWERSHELL 4.0 by Stanislav Zhelyazkov

SORT WINDOWS NETWORK ADAPTER BY PCI SLOT VIA POWERSHELL by Thomas Maurer

FRIDAY FUN WITH REST, REGEX AND REPLACEMENTS by Jeffery Hicks 

Sharepoint 

SharePoint Client Browser Tool in German by Toni Pohl

System Center Configuration Manager

Desired State Configuration Host Deployment: Local Configuration Manager by Damian Flynn

Multiple copy of Applications during ConfigMgr Task Sequence by David O'Brien

How to convert ConfigMgr Applications to Packages with Powershell by David O'Brien

System Center Virtual Machine Manager

KNOWN ISSUES FOR VIRTUAL MACHINE MANAGER IN SYSTEM CENTER 2012 R2 by Thomas Maurer

Integration Between SCVMM 2012 R2 and SCCM 2012 R2–Software Updates by Lai Yoong Seng

Using Share storage in HYPER-V through VMM 2012 R2 by Susantha Silva

SQL Server

QUICK LOOK AT THE NEW SQL SERVER 2012 ANALYSIS SERVICES MANAGEMENT PACK by Stanislav Zhelyazkov

Windows Client

Internet Explorer 11 Freezes With “Your Last Browsing Session Closed Unexpectedly” (Workaround) by Aidan Finn

Windows Server

Reverting the Forest & Domain Functional Levels in Window Server 2008 R2, 2012, 2012 R2 by Didier van Hoye

Group Policy Caching: sinnvoll oder sinnlos? in German by Matthias Wolf

How Microsoft Windows Build Team Replaced SANs with JBOD + Windows Server 2012 R2 by Aidan Finn

 

IT Pros, do you want to be a Dell TechCenter Rockstar? Apply by Feb. 20th!


The Dell TechCenter Rockstars are an elite group of participants who are recognized for their participation in Dell related conversations on Dell TechCenter, other tech communities and through social media.

Each year we select the Rockstar group through an application process and need your help identifying these unique individuals. If you or someone you know should be a TechCenter Rockstar, fill out our short application before February 20th, 2014 and return it to enterprise_techcenter@dell.com to be considered for our 2014 Dell TechCenter Rockstar program!

The benefits of the Dell TechCenter Rockstar Program

  • Official recognition on Dell TechCenter
  • Unique opportunities / experiences with Dell
  • Dell TechCenter gear
  • Invitations to applicable Dell events or conferences
  • Access to NDA briefings, early information on products
  • Increased engagement with Dell TechCenter team and Dell Engineers
  • Networking opportunities with fellow DTC Rockstars

What is a Dell TechCenter Rockstar?

  • Rockstars participate in Dell programs such as:
    • Dell beta programs
    • TechCenter chats
    • Guest blog posts on DTC
    • Wiki articles
    • Answering forum posts on Dell TechCenter
    • Dell product feedback or usability sessions
    • Advocating Dell online on 3rd party IT communities, blogs, social media, etc.
  • Rockstars conduct business in an ethical way, living up to the Dell code of conduct

2014 Dell TechCenter Rockstar application

 How cool was the 2013 DTC Rockstar Program?

In addition to a private forum and direct connections with Dell and the Dell TechCenter team, DTC Rockstars had access to special early briefings on the VRTX launches, hands-on time with Precision workstations, opportunities to present at Dell World and TechCenter user groups, additional exposure for their own blog posts on TechCenter and more. The year culminated in many of the Dell TechCenter Rockstars flying to Austin to participate in the Dell World 2013 conference and a Samsung-sponsored go-karting excursion.

Remember, we need applications turned in by February 20th, 2014, so don’t delay. We look forward to hearing from you, best of luck.

 

Compression and Dedupe Strategies for Online, Archival and Backup Tiers


In my last post, I discussed the differences in various dedupe and compression techniques. In this post, I'd like to talk about how a product should apply the correct technique in each data tier. People have different opinions on this, so here is my take on the correct technologies at each tier:

 Online storage:

  • The nature of this storage is fast access. Data is used all the time and the emphasis is on speed, not really space savings. The data path directly impacts application performance, so don't go too crazy on saving space.
  • Best techniques: Inline dictionary-based compression coupled with fixed block dedupe. Fixed block dedupe requires less CPU processing (the boundaries are predetermined and do not have to be computed in real time) and dictionary-based compression techniques are little more than memory lookups. There is almost no CPU time being utilized, so these two are fast and save space for many applications such as virtual machine workloads (virtual desktops), sparse databases, blogs and so on.

Archival storage:

  • Typically the data here is largely unstructured (just files). It is seldom accessed and some latency can be tolerated – it is usually a human user accessing the data.
  • Best techniques: My vote here is to go with very large chunk size variable dedupe (in a future blog post I'll explain the implications of chunk sizes in more detail).
    More importantly, I would go with file-specific compression techniques that understand the type of file they are dealing with and apply a file-specific algorithm. Additionally, I find that post-process techniques work best here, since the dedupe and compression stages are more CPU-intensive and you do not want to impact ingest speeds.

Backup storage:

  • The most important thing to keep in mind about backup targets (such as our DR4100 product) is that you want to minimize backup windows (how fast you can protect your data). To do this, you need the backup target to ingest as fast as the source can deliver the data… so we are talking about the fastest ingest speeds possible, typically on the order of terabytes per hour.
  • Best techniques: To achieve these speeds, I find it works best to have a small holding tank where incoming data accumulates into a meaningful portion (usually just a few gigabytes), then apply variable-sized dedupe with very quick compression and store the result to disk.

 

What do you guys think about the variations in approaches?  Have you thought about what happens when data moves between tiers?  I have some thoughts on that but I’d love to hear your input.

How to remove certificate warning message shown while launching iDRAC Virtual Console


This blog post has been written by Shine KA and Hrushi Keshava HS

The latest iDRAC7 firmware (1.50.50 onwards) has a provision to store iDRAC SSL certificates when launching the iDRAC Virtual Console and Virtual Media. Instead of saving iDRAC SSL certificates to the IE/Java trust store, you have the option to save the certificates you trust in a folder of your choice. This allows you to save several iDRAC certificates to the certificate store, without needing to place the iDRAC SSL certificate in the browser or Java trust store. Once you trust and save a certificate, you will no longer get a certificate warning message when launching the iDRAC Virtual Console / Virtual Media. This behavior is available in both the ActiveX and Java plugins.

When launching the Virtual Console / Virtual Media, iDRAC first validates the iDRAC SSL certificate. If the certificate is not trusted, iDRAC checks whether a certificate store is configured. If no certificate store is configured, the Virtual Console / Virtual Media launches with a certificate warning window. If a certificate store is configured, iDRAC compares the current iDRAC SSL certificate against the certificates saved in the store. If a match is found, the Virtual Console launches without any certificate warning. If there is no matching certificate, the Virtual Console / Virtual Media shows a certificate warning with an option to always trust the certificate. When you choose to always trust the certificate, the current iDRAC SSL certificate is stored in the configured certificate store, and you will not see the certificate warning the next time you launch the Virtual Console / Virtual Media.
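
The matching step amounts to a thumbprint comparison between the certificate presented by the current session and the certificate files saved in the store folder. The PowerShell below is a rough illustration of that idea only; the paths are placeholders and this is not the plugin's actual code.

    # Illustrative sketch of the matching flow described above (not the plugin's code).
    $currentCertPath = 'C:\idrac\current-session.cer'   # certificate presented by this iDRAC session
    $storeFolder     = 'C:\idrac\trusted-certs'         # folder configured as the certificate store

    $current = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 $currentCertPath

    $trustedThumbprints = Get-ChildItem -Path $storeFolder -Filter '*.cer' | ForEach-Object {
        (New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 $_.FullName).Thumbprint
    }

    if ($trustedThumbprints -contains $current.Thumbprint) {
        'Match found: launch the Virtual Console without a certificate warning.'
    } else {
        'No match: show the warning and offer "Always Trust this certificate".'
    }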

There are three easy steps to configure the certificate store.

STEP 1: Launching Virtual Console for first time

When you launch the Virtual Console for the first time and the iDRAC SSL certificate is not trusted, a certificate warning window is shown. This window describes the certificate information and the steps to configure the certificate store.

Note: Details gives you more information about the certificate, such as version, serial number, validity and more.

 

STEP 2: Configuring Certificate Store on iDRAC

After launching the Virtual Console, navigate to Tools -> Session Options -> Certificate. Here you can choose the path where trusted certificates should be stored.

Note: Details gives you more information about the current session's certificate, such as version, serial number, validity and more.

 

STEP 3: Launching Virtual Console with certificate store configured

After you specify the certificate store location, the next time you launch the Virtual Console a security warning window will pop up with the option to "Always Trust this certificate". You can save the certificate to the store by enabling this checkbox. Once the certificate is trusted, you will no longer see the certificate warning message.

SUMMARY:

            By following these simple steps, you can quickly remove the certificate warning page and have peace of mind that you are accessing your iDRAC securely.

Additional Information:

More information on iDRAC


Google+ Hangout: Deployment Automation on OpenStack with TOSCA and Cloudify


Please register for our OpenStack Online Meetup February 6th, 9.00 - 10.00 am PST (UTC-8 hrs).

TOSCA (Topology and Orchestration Specification for Cloud Applications) is an emerging standard for modeling complete application stacks and automating their deployment and management. It’s been discussed in the context of OpenStack for quite some time, mostly around Heat. In this session we’ll discuss what TOSCA is all about, why it makes sense in the context of OpenStack, and how we can take it farther up the stack to handle complete applications, both during and after deployment, on top of OpenStack.

Presenters:

Nati Shalom, Founder and CTO at GigaSpaces, is a thought leader in cloud computing and big data technologies. Shalom was recently recognized as a top cloud computing blogger for CIOs by The CIO Magazine, and his blog was listed as an excellent blog by YCombinator. Shalom is the founder and one of the leaders of the OpenStack-Israel group, and is a frequent presenter at industry conferences.

Uri Cohen is Head of Product at GigaSpaces.

This session will be hosted online via Google+ Hangout and IRC Chat (#OpenStack-Community @Freenode).

Citrix's latest desktop virtualization software: XenDesktop 7.5


Citrix has just announced the release of XenDesktop 7.5, the latest version of its desktop virtualization software. The release embodies several new features we're excited about, which will help address a new set of needs and use cases while supporting the continued growth in the use of XenDesktop.

For instance, one of the key features in the announcement is support for hybrid cloud deployments, which enables XenDesktop users to add temporary or new capacity rapidly without incurring capital expenditures. Customers can provision virtual desktops and applications to a private or public cloud infrastructure in conjunction with their existing virtual infrastructure deployments while using the same XenDesktop management consoles and skillsets. This saves overall cost while greatly increasing flexibility and scalability.

A second enhancement is that XenDesktop is now using the FlexCast Management Architecture, sharing this with their application virtualization product XenApp®. This innovation makes it the only unified architecture for application and desktop delivery, which increases IT flexibility while saving cost.

Dell continues to lead the industry in time-to-market with support for new Citrix releases. We were recognized with a "Best of Synergy" award at the annual Citrix Synergy event in 2013 for our 'Integrated Storage' configuration, which provides a very cost-effective infrastructure (at under $200/seat) to support XenDesktop deployments. Late last year, Dell introduced VRTX, an office-environment-friendly converged infrastructure that brings the benefits of XenDesktop virtualization to smaller and mid-sized customers. And finally, Dell continues to work closely with Citrix and NVIDIA on pioneering solutions that enhance graphics performance, from high-end 3D to the basic user. Read more about that here.

Dell has long been a leader in the provisioning of thin/zero clients, especially with Dell Wyse zero clients, which are optimized and Citrix Ready™ verified for Citrix XenDesktop. Supported by tested reference architectures, we package, sell and support the complete end-to-end solution, including servers, storage, networking, applications, services and your choice of endpoints, so everything works better together.

For more detailed information, please visit the Citrix announcement page here.

There’s security, and then there’s compliance


Years ago, when I took the Pragmatic Marketing class taught by the great Steve Johnson, I learned what was to become one of my favorite “rules to live by.”

“Your opinion, while interesting, is irrelevant.”

The point here is that we often fall into the habit of talking to ourselves and forgetting to engage with those who matter, i.e., customers and prospects. This became crystal clear to me during a recent internal conversation regarding security and compliance.

Occasionally, some of us here at Dell (not all of us, mind you), have fallen into the bad habit of using these terms interchangeably. After speaking to numerous compliance professionals and reading about recent security breaches at some major retailers, however, we once again have had our vision aligned to the real world: namely, that while related, security and compliance do have some slight differences.

At the most basic level, security versus compliance has to do with the difference between the here and now, and reporting on yesterday’s activities tomorrow.

  • Security is the discipline (and part art) of putting defenses in place to ensure the bad guys can’t get in while, at the same time, limiting as much as possible the barriers the good guys must hurdle to do their jobs. That is to say, security is about control: controlling your infrastructure, your perimeter and your identities.
  • Compliance, conversely, is the ability to report on what happened as a means to pass an audit. It’s the ability to report on who granted access to whom for what, what that person did with that access, or perhaps who placed that malicious code on the credit card processing servers (a simplified example of this kind of reporting follows below).
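
To make that reporting idea concrete, here is a deliberately simplified Python sketch. The audit records, field names and resource names are invented for illustration and are not drawn from any Dell product; the point is only that compliance questions reduce to queries like “who granted access to whom, for what, and when.”

    # Simplified illustration: answering a compliance question from a
    # hypothetical audit log. All field and resource names are made up.
    audit_log = [
        {"time": "2014-02-01T09:12", "actor": "jsmith", "action": "grant_access",
         "grantee": "mdoe", "resource": "cc_processing_srv"},
        {"time": "2014-02-01T11:47", "actor": "mdoe", "action": "copy_file",
         "grantee": None, "resource": "cc_processing_srv"},
        {"time": "2014-02-02T08:03", "actor": "aadmin", "action": "grant_access",
         "grantee": "jsmith", "resource": "hr_share"},
    ]

    def who_granted_access(log, resource):
        """Report every access grant recorded for the given resource."""
        return [
            f'{e["actor"]} granted {e["grantee"]} access to {e["resource"]} at {e["time"]}'
            for e in log
            if e["action"] == "grant_access" and e["resource"] == resource
        ]

    for line in who_granted_access(audit_log, "cc_processing_srv"):
        print(line)
    # -> jsmith granted mdoe access to cc_processing_srv at 2014-02-01T09:12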

And speaking of audits, those generally come in two varieties as well: internal audits mandated by the organization itself, and external audits mandated by a government or “strongly suggested” by an industry consortium, for example, PCI. With respect to PCI (can you say credit card risk?), we continue to see businesses struggle to comply because of the standard’s magnitude. Fortunately, Dell Software has several solutions that make the task of achieving both increased security and PCI compliance much less difficult (as well as compliance with many other regulations). We’ve even bundled this up into what is tantamount to a checklist of PCI requirements and how we can help you meet them. Or, if you prefer, check out this upcoming web seminar, “Preparing for the Inevitable: How to Limit the Damage from a Data Breach by Planning Ahead,” on Feb. 20.

Can you have compliance without security, or security without compliance? We suppose you can, but why would you? You need the compliance to ensure the security is working, and you need the security to, well, secure the environment. So check out the link above if your organization has the dual (but related) challenge of a PCI compliance mandate, and is looking for ways to mitigate security threats.

SharePoint is transforming the way people work and so is Dell Software


While the calendar might say we’re already into February, we’re just beginning a new fiscal year at Dell Software. This is the perfect opportunity to look ahead at the great year we are planning for our SharePoint solutions!

Transformation seems to be the name of the game in SharePoint. No longer is this a platform stood up under someone’s desk because they received free licenses and thought they’d try it out. SharePoint has moved out of its infancy and into maturation with improved capabilities around content collaboration, social communications and application development. According to Microsoft’s Bill Baer, senior product manager on the SharePoint team, in his October 2013 post entitled “SharePoint for the enterprise on-premises, Online, and everything in between”:

  • SharePoint supports 2 out of 3 information workers in the enterprise
  • More than 700,000 developers are building on SharePoint and Office
  • Microsoft is expanding its Office/SharePoint App Store, extending opportunities for code-free customization to everyone in the organization, not just developers

Just as SharePoint has matured in the market, it has also matured inside Dell. As a reflection of the importance of SharePoint to the overall Dell Software Windows business, it is now a formal part of the broader offering. Organizations are no longer just using SharePoint and needing SharePoint-only solutions. They are moving to adopt a complete Windows platform that includes SharePoint, Exchange, Active Directory and Lync, and they are seeking a provider who can bring offerings to support their entire investment rather than looking to one-trick pony vendors.

SharePoint is a major pillar of Dell Software’s Windows management offering. We’re excited to be moving with the market by delivering and investing heavily in our award-winning SharePoint products as part of a complete Windows migration, code-free customization and management offering. Here’s just a sampling of our investment in SharePoint and our forward-looking optimism for this technology as part of Dell’s broader Windows portfolio:

  1. Dell Software is excited about this year’s planned releases around a complete suite of SharePoint solutions for migration, code-free customization and management, and so are our customers, who love to talk about our solutions. Here are just a few examples:
    1. Quick Apps for SharePoint customer testimonials
    2. Site Administrator for SharePoint customer testimonials
    3. Dell Software SharePoint solutions took gold, silver and bronze in the recent 2013 Windows IT Pro awards, including Best SharePoint Product
    4. Dell Software was honored as Microsoft’s first-ever Office/SharePoint App Developer Partner of the Year at the 2013 Worldwide Partner Conference with the release of Dell Social Hub
    5. Dell Software is investing in market research to ensure we’re at the forefront of SharePoint’s transformation (like this one from November 2013 on customization and this one from January 2014 on security and social adoption)
    6. Dell Software is once again a premier sponsor at the SharePoint Conference 2014 in Las Vegas this March as we have been at every SharePoint Conference since its start in 2006 (stay tuned for more from Susan Roper on what we have planned at this year’s event)

Just as organizations seek to transform their SharePoint into the business-critical platform their company and their users need, Dell Software for SharePoint is meeting the market where it’s at with a transformative SharePoint and Windows offering.

Microsoft Most Valuable Professional (MVP) – Best Posts of the Week around Windows Server, MSExchange, SystemCenter and more – #66


"The Elway Gazette"

Hi Community, here is my compilation of the most interesting technical blog posts written by members of the Microsoft MVP Community. The number of MVPs is growing nicely, and I hope you enjoy their posts. @all MVPs: If you'd like me to add your blog posts to my weekly compilation, please send me an email (flo@datacenter-flo.de) or reach out to me via Twitter (@FloKlaffenbach). Thanks!

True Words from Aidan Finn: Employers Are Clueless About Hiring IT Staff

Featured Posts of the Week!

Active Directory from on-premises to the #cloud – #WindowsAzure AD whitepapers by James van den Berg

Unable to retrieve all data needed to run the wizard. Error details: "Cannot retrieve information from server "Node A". Error occurred during enumeration of SMB shares: The WinRM protocol operation failed due to the following error: The WinRM client sent a request to an HTTP server and got a response saying the requested HTTP URL was not available. This is usually returned by a HTTP server that does not support the WS-Management protocol. by Didier van Hoye

CREATING CIM SCRIPTS WITHOUT SCRIPTING by Jeffery Hicks

Strange directories in Exchange 2013 on D-drive by Jaap Wesselius

Azure 

Active Directory from on-premises to the #cloud – #WindowsAzure AD whitepapers by James van den Berg

Windows Azure Pack Troubleshooting Guide in German by Daniel Neumann

WINDOWS AZURE FOR YOUR DATACENTER by Thomas Maurer

Troubleshooting Windows Azure Pack - Re-register a Resource Provider by Kristian Nese

Windows Azure Active Directory & ASP.NET Tooling: A Look Behind the Scenes (in German) by Robert Mühsig

Getting Started with SignalR 2.0 & Azure Website WebSockets (in German) by Robert Mühsig

Windows Azure Active Directory – CRUD for Users and Groups (in German) by Robert Mühsig

Cloud OS Community – Windows Azure Pack 2013 Introduction by Damian Flynn

Techdays Berlin 2013 – Deliver Your Cloud Like a Hoster by Damian Flynn

WINDOWS AZURE PACK UPDATE 1 by Stanislav Zhelyazkov

Moving to Windows Azure – VMs, WordPress Migration, DNS Changes (in German) by Robert Mühsig

Exchange

Strange directories in Exchange 2013 on D-drive by Jaap Wesselius

Hyper-V

Unable to retrieve all data needed to run the wizard. Error details: "Cannot retrieve information from server "Node A". Error occurred during enumeration of SMB shares: The WinRM protocol operation failed due to the following error: The WinRM client sent a request to an HTTP server and got a response saying the requested HTTP URL was not available. This is usually returned by a HTTP server that does not support the WS-Management protocol. by Didier van Hoye

#Microsoft Platform Ready Test Tool 4.6 for Windows Server 2012 R2 #WS2012R2 #Hyperv by James van den Berg

KB2925727 – Unknown Device (VMBUS) In Device Manager In Virtual Machine For WS2012 R2 AVMA by Aidan Finn

KB2846340 – Duplicate Friendly Names Of NICs Displayed In Windows by Aidan Finn

Antivirus on a Hyper-V Host: Do You Need It? by Aidan Finn

Force removal of virtual machine from HYPER-V cluster by Susantha Silva

Office 365

Delegate365 version 2 by Toni Pohl

Office 365 Needs to Treat the UX as an API if Our Customizations are to Stay Off the Server by Andrew Connell

PowerShell

Write-Host – The gremlin of PowerShell by Jeff Wouters

Compare installed updates and hotfixes by Jeff Wouters

A PowerShell function to get the installed software by Jeff Wouters

A PowerShell function to compare installed software by Jeff Wouters

YOUR FUTURE IN POWERSHELL by Jeffery Hicks

POWERSHELL ESSENTIALS WEBINAR by Jeffery Hicks

CREATING CIM SCRIPTS WITHOUT SCRIPTING by Jeffery Hicks

FRIDAY FUN: MESSAGEBOX WRITER by Jeffery Hicks

SharePoint

InfoPath is dead (soon) by Toni Pohl

System Center Core

UR1 For System Center 2012 R2 Is Available – Be Careful by Aidan Finn

Capacity Planning for Hyper-V Using System Center by Lai Yoong Seng

System Center Configuration Manager

UPDATE: How this Inventory Script makes your ConfigMgr life easier by David O'Brien

System Center Data Protection Manager

Backup and Recovery : Integration Between SCOM 2012 R2 and SCDPM 2012 R2 by Lai Yoong Seng

System Center Operations Manager

System Center 2012 Operations Manager. How to quickly become more efficient by Alessandro Cardoso

System Center Orchestrator

FROM ORCHESTRATOR TO SERVICE MANAGEMENT AUTOMATION: MIGRATION SCENARIOS by Stanislav Zhelyazkov

System Center Service Manager

INSTALLING SYSTEM CENTER SERVICE MANAGEMENT AUTOMATION WORKER ROLE FAILS WITH MSI ERROR 1603 by Stanislav Zhelyazkov

System Center Virtual Machine Manager

Monitoring: Integration Between SCVMM 2012 R2 and SCOM 2012 R2 by Lai Yoong Seng

Installing UR1 for System Center 2012 R2 Virtual Machine Manager (in German) by Daniel Neumann

Steps for Updating SCVMM 2012 R2 to Update Rollup 1 by Rob McShinsky

How to update SCVMM 2012 R2 to Update Rollup 1 #SCVMM by James van den Berg

System Center 2012 R2 VMM – KB Article Archive (in German) by Daniel Neumann

Windows Server

Desired State Configuration and Local Configuration Manager by Damian Flynn

 
