February 27, 2013
By Reed M. Wiedower
Microsoft’s cloud-based server analysis software, System Center Advisor (SCA), has now been released to the public, so everyone, not only customers with Software Assurance, can take advantage of this agent-based method for ensuring servers and services align with Microsoft best practices.
Many customers have begun implementing robust change management procedures to ensure servers and services are altered only at specific times, with easy rollback methods. Even so, it can be difficult to detect unplanned changes, or unauthorized side effects of planned ones, without a robust monitoring platform. Customers with System Center Configuration Manager 2012 in place can set up pre-built baseline configurations that alert administrators when servers drift from these goals. For those without SCCM 2012, or for SCCM installations that don’t monitor every server, this lift can be significant. Enter System Center Advisor!
Implementing the hosted SCA architecture is quite easy, and within minutes you can be baselining not only your operating system but key workloads like SQL Server, Active Directory, Exchange, SharePoint, Lync and even Hyper-V. If a configuration change occurs, you’ll be alerted to the inconsistency within a day. This doesn’t take the place of a robust monitoring system such as System Center Operations Manager looking for flaws (low disk space, high CPU utilization, etc.), but it does ensure that registry changes and service modifications that could eventually cause a loss of service are noted and mitigated the day they occur, rather than months later when a problem appears. With SCA in your back pocket, you can implement a robust change management procedure and ensure it’s working, as well as ensure all your critical services are running the same way Microsoft runs them. When best practices change, as they often do, SCA will update automatically to alert your organization.
Now that SCA is available for all, give New Signature a call today to learn how you can take advantage of this free, hosted service to keep your environment in perfect working order.
February 26, 2013
Christopher Hertz Named 2013 Locally Grown Honoree by Network for Teaching Entrepreneurship of Greater Washington, DC
By New Signature
Christopher Hertz, founder and CEO of New Signature, headquartered in Washington, DC, will be honored as a “Locally Grown Hero” at NFTE DC’s Dare to Dream Gala on April 10, 2013 in Washington, DC.
Christopher was among five area business owners named as the 2013 Locally Grown Honorees by the Network for Teaching Entrepreneurship (NFTE) Greater Washington Region. The winners are being recognized for building companies in the Washington region and making significant contributions to their respective communities.
“The Locally Grown awards allow us to recognize people from our community who are leaders in both business and public service. We are thrilled to have such an accomplished group of honorees and we look forward to them becoming engaged as mentors for our students,” said John Hasenberg of Merrill Lynch, Chairman of NFTE’s Advisory Board, DC Region.
Since 1987, the Network for Teaching Entrepreneurship (NFTE) has been inspiring young people to pursue educational opportunities, start their own businesses, and succeed in life. NFTE is the only global nonprofit organization solely focused on bringing entrepreneurship education to low-income youth. To date, NFTE programs have served close to 400,000 young people globally and 24,000 locally. Currently, NFTE programs are active in 10 countries and 21 U.S. states. Here in the DC Region, NFTE has programs in 22 schools and reaches close to 900 young people annually.
New Signature invites you to help us celebrate the area’s young entrepreneurs and future leaders who Dare to Dream by attending or sponsoring the Gala. You can purchase tickets and review sponsorship opportunities online; for more information or to sponsor the 2013 Dare to Dream Gala, please contact Kara Johnson, Director of Development, at email@example.com.
By Jessie Collins
You would be hard-pressed these days to find a user experience designer who does not take a user-centered approach to designing web sites. Heck, who are we building these sites for if not the users? Ideally, a large part of the discovery effort is focused on listening to users and learning what they would like out of this particular online experience. In many cases, however, it can be difficult to communicate directly with potential users. Maybe there isn’t time to assemble real users into a focus group. Maybe the budget can’t support an extended series of stakeholder interviews. Maybe the client feels as though direct contact with users is unnecessary. Regardless of the reason, your user-centered site now has to be built without the user’s direct input.
Happily, there is an exercise that can help you and your client identify the needs of (and empathize with) these elusive users. The empathy map was developed by XPLANE as a method for understanding audiences in any business ecosystem. The exercise helps participants gain a deeper level of understanding of a user within a particular context. Requiring little preparation or overhead, the empathy map provides a forum for stakeholders to focus more on the users’ goals rather than their own.
Putting it in Action
1. Empathy mapping is best performed as a group. Assemble the core project team and any key stakeholders – especially those who work directly with your potential audience.
2. Using a white board or large sheet of paper, draw a big circle that represents the user. Keep the drawing simple, but give your user a name or some characteristics (hair, glasses, eyes, nose, mouth). Providing a little bit of detail helps participants identify with the user and think of them as an actual person.
3. Divide the circle into sections that represent the different sensory experiences. You want to focus on what the user is thinking or feeling, saying or doing, seeing, and hearing. Label each of the sections (see image).
4. Put the user in a particular context and/or pose a question to them. Maybe you want to think about how they would approach a particular online task or how they are going to make a purchasing decision.
5. Arm the stakeholders with sticky notes and a pen or marker.
6. Now it’s time to empathize. Focus on one particular sensory area. Ask the stakeholders to put themselves in the user’s shoes and write on their sticky notes what the individual is thinking/feeling, saying/doing, seeing or hearing in the context that has been provided. Instruct participants to focus on real, sensory experiences and to use language that represents the user rather than themselves. Remember, the idea is to imagine you are the user.
7. Taking turns, ask the stakeholders to place their notes on the map and explain their contributions.
8. Repeat steps 6 and 7 until you have visited all areas of the map.
9. Now ask participants to brainstorm about the user’s pain points in this experience. Capture these on the map.
10. Finally, ask participants to consider the gains for the user – what’s in it for them. This should form the value statement for the online experience.
Analyze the Findings
Before dismissing the group, talk a bit about what you see in the map. Is there agreement across the stakeholders? Is there some disagreement? What might this mean for your design? Do participants feel as though they truly empathized or are they still focused on their own agenda? If participants struggle with the exercise or are unable to complete sections of the map, that may be a good indicator that they really don’t know the user as well as they might have thought.
Armed with the map and any additional thoughts supplied by the follow-up discussion, a user experience professional can further analyze the user’s specific needs and goals. This input can flesh out user personas, build user stories and support the development of general site goals. In the end, you will have a much better understanding of your user.
While it’s always a good idea to gather input directly from your audience, the empathy map can strengthen your understanding of your users and help build internal support (and empathy) for them.
This is just one of the tools we use to build an optimal online experience. To learn how New Signature can help your organization successfully craft an online experience, please contact us at firstname.lastname@example.org.
February 21, 2013
By Peter Day
In “FIM R2 Best Practices Volume 1,” David Lundell and Brad Turner set out to provide a thorough introduction to the architecture and installation of Forefront Identity Manager 2010 R2. The book was originally published in 2010 for the original release of the product, but was republished in 2012 for the new R2 version. Helpful alerts throughout the book highlight where things have changed in R2.
The book starts off with a brief discussion of the challenges of Identity Management in general and then goes straight into a discussion of the history and architecture of Forefront Identity Manager 2010 (FIM). This includes a brief mention of the interesting BHOLD suite of products that supports features such as Role-Based Access Control (RBAC) and attestation, though unfortunately BHOLD is not covered again later in the book.
There is a good chapter on the possible topologies for a FIM implementation that covers the topic in a very accessible way, with plenty of diagrams and tables to illustrate key points. Along the way, the authors bring in their knowledge of Microsoft SQL Server 2008 where it helps to illustrate a FIM topic or procedure. There is also a chapter on sizing FIM for those already working in very large environments, or for those who expect their organization to grow greatly over time – some choices you make affect whether you will be able to scale out your solution in the future, and these are highlighted for you.
Also included is an in-depth chapter on the prerequisites you need installed and configured before you even start to install FIM 2010 R2. The chapters on installing the prerequisites and FIM itself both have plenty of screen shots and illustrations to help you visualize the process and link the book with what you see during your own installation.
Much of what is in the book may be available in different places on the web; however, the authors do a great job of pulling relevant information together into a coherent story, and they also contribute information learned from their real-world experiences with FIM and Identity Management. The one part of the product that is completely omitted is FIM Certificate Management (CM) – the authors are up front about this and state it is because FIM CM is normally a separate project.
Though the book is self-published, it is still of very good quality and definitely serves the purpose stated in its title. As far as I know, the book is only available from www.lulu.com at the link below.
“FIM R2 Best Practices Volume 1: Introduction, Architecture And Installation Of Forefront Identity Manager 2010 R2”
http://www.lulu.com/shop/david-lundell/fim-r2-best-practices-volume-1-introduction-architecture-and-installation-of-forefront-identity-manager-2010-r2/paperback/product-20424725.html
If you are going to be working with Forefront Identity Manager or studying for the Microsoft exam on the product (exam code: 70-158), then this book is definitely worth the $25 price tag. I certainly look forward to the publication of volume 2 in the series and hopefully more after that on the use and troubleshooting of Forefront Identity Manager.
February 19, 2013
By Reed M. Wiedower
Yes, it’s true: Windows Intune can deploy standard applications to your legacy desktop. This functionality, while incredibly useful, has been around for quite some time. The main difference with Wave D of Intune is that the licensing has moved from a device-centric model to a user-centric model: each user license of Intune covers up to five different devices.
What’s even more exciting with the latest version of Windows Intune is the ability to deploy modern applications to a variety of devices, including iOS devices, Windows RT machines (such as the Surface), Windows Phone 8 and Windows 8 itself. The process varies slightly between each device, so today I’ll walk through the procedure for Windows 8 and Windows RT.
The application deployment mechanism for all modern Windows 8 devices (whether Windows Phone 8, Windows RT or regular Windows 8) is a simple app called the “Company Portal”. This application can be obtained from the online stores and enables end users to consume corporate or public applications. For other devices, you’ll continue to use the Intune company portal web page. If you go to the web page using a Windows 8 device, it will seamlessly transition you to the Company Portal application, making it a breeze to use.
The first question you’ll need to ask is: do you wish to push out a modern application that lives in an online device store, or something that you have crafted in-house? This question determines the application deployment strategy, as apps that are purchased through an online store (whether the Windows Store, the Windows Phone Store, or even the Apple App Store) are handled differently than ones developed in-house.
For apps that live in the online stores, you can use Intune to push out external links to the applications themselves. Note that Intune only distributes the links; the financial side of distribution (purchasing the apps) must be worked out through the stores themselves. Because your iOS and Android users will need to go to the web page regardless to consume the apps, you’ll likely want to direct all staff to the website, which will allow those on Windows 8 devices to obtain the Company Portal application.
For custom in-house applications, you will have full control over the packaging process, and it will mirror the existing system currently used to build applications. Regardless of the underlying OS (iOS, Android or Windows), you have the ability to build custom packages and deploy them to the device. The only decision point to consider is whether you are deploying to a Windows 8 device, as these devices will require use of the Company Portal.
Once you’ve determined what type of applications you wish to push, the next question is your overarching management strategy for mobile devices. In the latest version of Windows Intune, you have two choices: use System Center Configuration Manager 2012 SP1, or use a pure Windows Intune solution. This is the very first choice you’ll be presented with when you log in to the mobile device management page, so it’s important to realize that if you choose to go with a pure Intune solution, you cannot currently change your mind at a future point without redeploying the agents. If you’ve already got SCCM 2012 in place, updating to Service Pack 1 will enable the two products to work in concert.
To select the management choice, simply select “Set Mobile Device Management Authority” under the “Tasks” item within the Mobile Device Management section of Intune.
Now that you’ve chosen your mobile device management platform, it’s time to get down to the nuts and bolts of setting up remote management of mobile devices. We’ll cover Windows RT in this post, so that by the conclusion you’ll have an environment capable of managing RT devices.
To begin managing Windows RT devices, go into the “Windows RT Management” section within the “Mobile Device Management” page. Once there, you’ll see three steps to complete, all of which are optional, with two of them only affecting internal application deployment. The first step is to add a DNS record to enable auto-enrollment for your devices. You’ll simply need to add an alias (CNAME) for enterpriseenrollment.domainname.com that points to enterpriseenrollment.manage.microsoft.com. If you perform this step, any device connecting to your tenancy will automatically be able to provision. Skipping this step (for testing purposes, for instance) will force device owners to manually enter the full DNS name of enterpriseenrollment.manage.microsoft.com when enrolling.
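If you want to confirm the alias is in place before rolling enrollment out, a few lines of Python can do it. This is a minimal sketch using the third-party dnspython package (not something Intune requires); domainname.com is the placeholder from above and should be replaced with your own verified domain.

```python
# Sanity-check the Intune auto-enrollment CNAME.
# Requires the third-party dnspython package: pip install dnspython
import dns.resolver

DOMAIN = "domainname.com"  # placeholder -- use your own verified domain
ALIAS = "enterpriseenrollment." + DOMAIN
TARGET = "enterpriseenrollment.manage.microsoft.com."

try:
    answers = dns.resolver.resolve(ALIAS, "CNAME")
    targets = {str(rr.target).lower() for rr in answers}
    if TARGET in targets:
        print("OK: " + ALIAS + " points to " + TARGET)
    else:
        print("Unexpected CNAME target(s): " + ", ".join(sorted(targets)))
except dns.resolver.NXDOMAIN:
    # No alias: devices can still enroll, but owners must type the
    # full enterpriseenrollment.manage.microsoft.com name manually.
    print("No CNAME found for " + ALIAS)
```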
The second and third steps only apply to applications that you have developed internally (if you are only using externally linked apps, you can skip these two). The first of these is to procure a sideloading key; by default, Windows RT devices do not allow sideloading.
What is sideloading? In Windows 8 and Windows RT, sideloading is a technique used to install software without going through the Windows Store. By default, Windows 8 Enterprise allows any domain-joined machine to sideload applications, so most organizations don’t need to do much work to enable this functionality. Windows RT, on the other hand, doesn’t allow domain joining, so the story gets more complicated. Fortunately, sideloading enterprise line-of-business apps for Windows RT is a fairly narrow use case. Why? Because by default, most sideloaded applications in an enterprise environment will be for the legacy desktop mode of Windows 8, which, as we mentioned earlier, is enabled easily through any domain-joined machine. Only if an organization has its own developers (1), codes a modern, non-desktop-based application (2), is unwilling to distribute that application (even for free) through the Windows Store (3), and needs to run the application on Windows RT in addition to Windows 8 (4) will it need to sideload. Each of those gates reduces the overall number of impacted machines.
If you find yourself at this point and still need to sideload, you’ll have to purchase a specific sideloading key through your regular volume licensing framework. Keys may be purchased via the regular Open or Select methods; if you already have an Enterprise Agreement for Windows 8 with Software Assurance, you’ll be able to access your keys without an additional purchase. The rules are detailed here (in PDF format), with one important caveat: purchasing through Open or Select may only be done in blocks of 100, so organizations with fewer than 100 Windows RT devices will likely prefer the Windows Store method. The rough cost ends up being around $25-28 per device, so a minimum block of 100 runs roughly $2,500-2,800. Once you’ve downloaded your key from the Volume Licensing Service Center, you can upload it into the Intune console.
Now that you’ve entered your key, the final step to enable sideloading is to publish a code-signing certificate. If your endpoints already trust the Microsoft certificate authority (or your own), this step isn’t necessary, but for devices that are disconnected from your domain, such as Windows RT, it may be necessary to include a publicly signed certificate here.
Now that you’ve completed these three steps, you’re ready to get applications loaded onto your device. For Windows RT, there are a few steps you’ll need to perform to ensure the device is enrolled properly and able to consume the applications. From the Windows RT device, you’ll need to log in as a local administrator (this is important) and then configure your “Company Apps” setting within the operating system itself. “Company Apps” is not exposed to non-administrators, and the easiest way to access it is to type “Company Apps” at the Start screen and then select the “Settings” area within the Search charm.
If you’ve configured the DNS settings on the tenancy, you’ll just need to log in and then click “install the management tool” to get the Company Portal application installed. You can, of course, also manually enter the management server name and then install the tool.
Once “Company Apps” is configured and “Company Portal” has been installed, you’ll be all set to consume applications, even from a non-local-admin account. Simply log in with your regular account, fire up the Company Portal application, and you can begin consuming published content! We’ll have a follow-up post on the various ways software can be loaded into the console for consumption not only by Windows RT, but also by Windows 8, iOS, Windows Phone 8 and Android devices.
Need a company that’s been there before to assist? Reach out to New Signature, Microsoft’s Intune Partner of the Year, to learn more about how Windows Intune can help drive down your costs and drive up your customers’ satisfaction.
February 15, 2013
By Reed M. Wiedower
Value Added Resellers (VARs) occupy a unique niche within the Microsoft ecosystem. They must be trusted advisors for companies, providing timely licensing quotes that meet the increasingly technical design requirements of organizations, all while adding value on top of the software sale itself. Six months ago, Director Jess Givens launched New Signature’s Procurement division because she felt there was a gap between the purchasing programs of other groups and the high level of customer service New Signature’s customers had grown accustomed to. Now, we are happy to report that New Signature has been invited by Microsoft to join the prestigious VAR Champions Club. It is a huge honor, and all the greater that we have hit this goal only six months into our affiliation. For 2012, New Signature was ranked number 43 out of over 1,700 Microsoft Partners in the Mid-Atlantic region for licensing sales, and number 137 out of over 5,200 partners for the entire East Coast region.
When Jess started the division, she was unsure how quickly she could grow it while maintaining our high standards, but the process could not have gone more smoothly. Not only does she get to carry our customer service over to a new area, but the fast-paced environment and requirement for accuracy have played to New Signature’s strengths. It’s always exciting to keep up with the latest and greatest in licensing, and New Signature has married this with achieving Gold in the Microsoft Partner Network’s Volume Licensing competency. 2013 should be an even more exciting year!
If you’re looking to make a software or hardware purchase, and need a strategic partner, do not delay: reach out to Jess or the rest of the New Signature team and we’re confident your experience will be a quick, thorough and technically excellent one.
February 14, 2013
By Ben Pahl
Microsoft recently enabled subscription-based licensing for the Office 2011 for Mac desktop software via an Office 365 subscription.
The latest version of Office 2011, 14.3, now features a sign-in option to activate the product alongside the old license-key activation method. OS X 10.6 or above is required for this feature. Note that Office 2011 *can* be installed on OS X 10.5.8, but the subscription licensing option isn’t supported prior to OS X 10.6. It almost goes without saying, but your Office 365 subscription must include Office ProPlus for the specific person who will be using it.
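As a quick illustration of that version floor, here is a tiny Python sketch (purely illustrative, standard library only) that checks whether the Mac it runs on meets the OS X 10.6 requirement:

```python
# Illustrative check of the OS X 10.6 floor for subscription activation.
import platform

release, _, _ = platform.mac_ver()  # e.g. "10.8.2"; empty string off OS X
if not release:
    raise SystemExit("Not running on OS X.")

version = tuple(int(part) for part in release.split("."))
if version >= (10, 6):
    print("OS X " + release + ": subscription sign-in is supported.")
else:
    # Office 2011 itself installs as far back as 10.5.8, but only
    # the traditional license-key activation works there.
    print("OS X " + release + ": use license-key activation instead.")
```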
Using this new feature is as easy as slipping on ice. To start, install Office 2011 on a Mac. After installation is complete, open an Office program and you will be greeted with an activation prompt. Click to use the Subscription option.
This functionality works for Office 365 Home, University and Commercial users.
Type in your Office 365 email address and click Next.
Next, type in your password and click Sign In. *Note that we have successfully tested this functionality with Federated Office 365 authentication as well as Office 365 cloud-based authentication.*
You will be greeted with a final screen confirming activation and you’ll be all set!
It’s not yet clear how to change the licensing for subscription-based Office 2011 installations after setup. However, a complete uninstall/reinstall would likely take care of removing Office 365 subscription-based licensing from a Mac. Existing methods of modifying licensing plist files to reset license status don’t seem to work after the license model is changed to subscription-based.
February 13, 2013
By Joshua Brechbuehl
Recently, we investigated a performance issue affecting a Microsoft SQL Server hosting a System Center Service Manager 2012 Data Warehouse database. The problem appeared to center on a lack of available memory on the server. After digging a bit deeper, we discovered that SQL Server Analysis Services (SSAS) was consuming far more system memory than was originally anticipated when the server was built and sized. What we found is that SQL Analysis Services manages system memory very differently from SQL Database Services.
If you are accustomed to performance-tuning your SQL Server instance, you already know that you can access the relevant window by starting SQL Management Studio, right-clicking on the SQL instance you would like to view, selecting Properties and clicking Memory in the left-hand pane.
As you can see, adjusting the Minimum and Maximum memory usage for a SQL Database Services Instance is quite simple and self-explanatory. Once set, SQL will use as much memory as it needs up to the Maximum amount. This is especially useful if you have a multi-purpose SQL server hosting multiple SQL Database Instances.
In SQL Analysis Services, however, memory configuration is handled very differently. SQL Analysis Services has a special memory “cleaner” background thread that constantly determines whether it needs to clean up memory. The cleaner looks at the amount of memory used and follows these basic rules to control the amount of physical memory used by SSAS (a sketch of the decision logic follows the list):
- If the memory used is above the value set in the TotalMemoryLimit property, the cleaner cleans up to this value.
- If the memory used is under the value set in the LowMemoryLimit property, the cleaner does nothing.
- If the memory used is between the values set in the LowMemoryLimit and the TotalMemoryLimit properties, the cleaner cleans memory on a need-to-use basis.
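To make those three rules concrete, here is a minimal Python sketch of the cleaner’s decision logic as described above. The property names are real SSAS settings, but the function itself is only an illustration of the rules, not actual SSAS code; the limits are assumed to have already been converted to bytes (see below for how SSAS interprets the configured values).

```python
# Illustrative sketch of the SSAS cleaner thread's decision rules.
# All values are in bytes.

def cleaner_action(memory_used, low_memory_limit, total_memory_limit):
    """Return what the cleaner does at a given level of memory use."""
    if memory_used > total_memory_limit:
        # Above TotalMemoryLimit: clean back down to that value.
        return "clean down to TotalMemoryLimit"
    if memory_used < low_memory_limit:
        # Under LowMemoryLimit: nothing to do.
        return "do nothing"
    # Between the two limits: clean on a need-to-use basis.
    return "clean as space is needed"

GB = 1024 ** 3
print(cleaner_action(20 * GB, low_memory_limit=16 * GB, total_memory_limit=24 * GB))
# -> clean as space is needed
```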
To view or modify the SQL Analysis Services instance memory configuration, follow a method similar to Database Services above: open SQL Management Studio, right-click on the SSAS instance you’d like to view and select Properties; the window below will appear. Notice that HardMemoryLimit is set to 0 by default.
If the value specified in any of these properties is between 0 and 100, the value is treated by SSAS as a percentage of total physical memory. If the value specified is greater than 100, the value is treated by SSAS as an absolute memory value (in bytes). Note that when Analysis Server is processing, if it requires additional memory above the value specified in TotalMemoryLimit, it will try to reserve that amount, regardless of the TotalMemoryLimit value.
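The percentage-versus-bytes rule is easy to trip over, so here is a small illustrative helper that resolves a configured value the way the paragraph above describes (the 0 case is left to the engine’s default behavior):

```python
# Illustrative: resolve an SSAS memory property to bytes.
def resolve_memory_limit(configured_value, total_physical_bytes):
    if configured_value == 0:
        return None  # 0 defers to the engine's default behavior
    if configured_value <= 100:
        # Values up to 100 are a percentage of total physical memory.
        return total_physical_bytes * configured_value // 100
    # Values above 100 are absolute bytes.
    return configured_value

GB = 1024 ** 3
# TotalMemoryLimit of 25 on a 64 GB server caps cleaning at 16 GB:
print(resolve_memory_limit(25, 64 * GB) // GB)       # -> 16
# The same amount written out in bytes is taken literally:
print(resolve_memory_limit(16 * GB, 64 * GB) // GB)  # -> 16
```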
In the troubled SQL environment, we found that while the TotalMemoryLimit was set to 25 (percent), the HardMemoryLimit was left at its default of 0. With a HardMemoryLimit of 0, SSAS does not enforce the configured total as a hard ceiling (the effective hard limit defaults to roughly midway between the TotalMemoryLimit and total physical memory), which is what allowed our SSAS memory to grow uncontrollably.
We set the HardMemoryLimit to 30 (percent), restarted SQL Analysis Services and monitored memory usage.
This solved our runaway memory problem with the System Center Service Manager SQL Analysis Services instance.
**NOTE** The percentages shown in the screenshots above may not reflect Microsoft best practices. Every environment is different and you should research, test and validate settings in your own test environment prior to pushing changes to production.
February 12, 2013
By Peter Day
What is a blue screen?
If Microsoft Windows has a serious problem, it might give a “blue screen” error. When this happens, all open windows close and a dark blue screen with white text appears. This is commonly referred to as the “blue screen of death,” or BSOD, and your only option to recover your computer is to record the error message and reboot.
How is a blue screen useful?
The blue screen often contains information that is essential for IT staff to be able to diagnose and repair the cause of the error. However, most end users will reboot the computer when they get a blue screen and that diagnostic information is lost.
How can you recover the diagnostic information?
After your computer restarts following a crash, you can review the System event log for an entry of type “Error” that resembles the one below and informs you that a “dump” file has been created. The dump file will have a .DMP extension, and the event log will give its location on the local drive.
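If you just want to confirm whether any dump files exist before installing a tool, a few lines of Python will list them. This is a simple sketch that checks the default minidump location; the path below is Windows’ standard one, but your system may be configured to write dumps elsewhere.

```python
# List minidump files and their timestamps (default Windows location).
import glob
import os
import time

DUMP_DIR = r"C:\Windows\Minidump"

if not os.path.isdir(DUMP_DIR):
    print("No MiniDump folder found - no minidumps have been created.")
else:
    for path in sorted(glob.glob(os.path.join(DUMP_DIR, "*.dmp"))):
        written = time.ctime(os.path.getmtime(path))
        print(path + "  (written " + written + ")")
```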
Sometimes there is no further useful error information in the Event Logs so we need to analyze the dump file instead.
How can you analyze the dump file?
At this point I would download and run a useful tool called “Blue Screen View” from Nirsoft. It is Freeware and available at the following site: http://www.nirsoft.net/utils/blue_screen_view.html
Once installed, this handy tool will scan the C:\Windows\MiniDump folder for .DMP files and list them in the top pane of the program. As you click on each .DMP file in “Blue Screen View”, the corresponding blue screen is re-created in the bottom pane of the window. You can then see all of the diagnostic information from the blue screen error, as shown below. (Note that if a minidump has never been created, the MiniDump folder will not exist.)
In the example above the mention of “watchdog.sys” led to a web search that pointed towards issues with the graphics system, so now I had a lead on where to start further troubleshooting.
You’ll see in the title bar of the above example that the minidump files being analyzed are in the C:\temp\ folder; this is because I copied them from a failing machine for analysis on my management workstation. To do this, you need to choose “Advanced Options” from the Options menu to change the folder being referenced, as shown below:
What if I need more help?
Collectively, New Signature staff have decades of experience in troubleshooting and repairing Windows systems, so please give us a call if you need help recovering from a blue screen.
February 4, 2013
By Joshua Brechbuehl
Microsoft System Center Service Manager (SCSM) 2012 delivers standardized, compliant and automated IT as a Service. SCSM can be used for not only managing Incidents, but also managing Change Requests and providing a Self-Service Portal and Knowledgebase for customers.
The most anticipated new feature arriving in SCSM 2012 Service Pack 1 (SP1) is the charge-back feature. In 2013, IT continues to transition away from traditional IT to a cloud-optimized IT model. In traditional IT, infrastructure was largely physical, service level agreements (SLAs) typically spanned weeks or months, and capacity was owned and managed by the consumers. Cloud-optimized IT is changing this behavior, allowing on-demand solutions and shorter SLAs while, unfortunately, exacerbating problems caused by typical consumer behavior: over-subscription and under-utilization of IT resources. Luckily, SCSM 2012 SP1 includes a feature which can assist in solving this problem.
In SCSM 2012 SP1, charge-back allows IT organizations to communicate more effectively with consumers about how they consume resources and capacity. Consider the Virtual Machine Manager (VMM) Cloud-based SLA model. Each cloud may have a different SLA, and may be managed and priced differently. For instance, a messaging cloud including Microsoft Exchange and Microsoft Lync may be managed with a different SLA than a backup and recovery cloud. Because of these differences in SLA, SCSM 2012 SP1’s charge-back model is flexible and allows for multiple price sheets to be available. A price sheet can be created and assigned to a VMM cloud.
SCSM 2012 SP1 charge-back reporting is online analytical processing (OLAP) cube based, which allows easy customization of the reports that can be delivered to IT consumers. These reports display infrastructure use and cost, including, for example, VM Days of Use, VM Memory Use, VM Storage Use and Total VM Cost. Such detail is key to allowing a dynamic cloud IT environment to operate efficiently and cost effectively.
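To illustrate how a price sheet might drive those reports, here is a hypothetical sketch of the charge-back arithmetic. The rates and field names below are invented for illustration only; they are not SCSM’s actual schema or pricing.

```python
# Hypothetical illustration of a charge-back price sheet; the rates
# and field names are invented, not SCSM 2012 SP1's actual schema.

MESSAGING_CLOUD_PRICE_SHEET = {
    "base_rate_per_vm_day": 1.50,  # flat charge per VM-day of use
    "per_gb_memory_day":    0.10,  # charge per GB of memory per day
    "per_gb_storage_day":   0.05,  # charge per GB of storage per day
}

def total_vm_cost(price_sheet, days, memory_gb, storage_gb):
    """Total charge for one VM over a billing period."""
    return days * (
        price_sheet["base_rate_per_vm_day"]
        + memory_gb * price_sheet["per_gb_memory_day"]
        + storage_gb * price_sheet["per_gb_storage_day"]
    )

# A 4 GB / 100 GB VM running all 30 days in the messaging cloud:
print(total_vm_cost(MESSAGING_CLOUD_PRICE_SHEET, days=30, memory_gb=4, storage_gb=100))
# -> 207.0  (45 base + 12 memory + 150 storage)
```

A backup-and-recovery cloud with a different SLA would simply carry its own price sheet with different rates, which is exactly the flexibility the multiple-price-sheet model provides.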
What’s New in SCSM 2012 SP1
- A charge-back model can be created for use in organizations where accounting for and recouping time and costs is required.
- Improved Operations Manager integration.
- The Operations Manager SP1 agent is installed automatically as part of SCSM SP1.
- SQL Server 2012 support.
- Windows Server 2012 support (except for the Self-Service Portal SharePoint web parts).
Looking to implement System Center Service Manager? These new updates through Service Pack 1 bring some significant enhancements. Reach out to New Signature to learn how you can take advantage of them today!