
Monday, November 29, 2010

Google Earth 6 Brings Integrated Street View And 3D Trees. Yes, Trees. 80 Million Of Them!

 

There’s an easy way to tell that Google Earth is getting so advanced that it’s dangerously close to looking like the actual Earth: the touted new features are starting to sound kind of humorous. While version 4 brought the sky, and version 5 brought the oceans, now version 6 is bringing trees. Yes, trees. I fully expect version 7 to highlight the addition of dirt.

Kidding aside, the latest version is obviously the best one yet. And trees are obviously a hugely important part of the Earth. To get them into Google Earth, the search giant has made 3D models of over 50 different species of trees. And they’ve included over 80 million of them in various places around the world including Athens, Berlin, Chicago, New York City, San Francisco, and Tokyo. They’re also working with some conservation organizations to model threatened forests around the world.

The other big addition to this latest version of Google Earth is Integrated Street View. To be clear, Google has had a form of Street View in Google Earth since 2008, but now it’s fully a part of the experience. This means that you can go all the way from space right down to Street View seamlessly. That’s because Google has now included their Street View mascot/button, Pegman, in the main navigation controls. Just as in Google Maps, you pick him up and drop him anywhere highlighted in blue, and you’ll be taken to the detailed Street View.
And you can now fully navigate the Earth using Street View in Google Earth. Simply use your keyboard or mouse to move around.
Google Earth 6 also makes it easier to discover and explore historical imagery. This feature was added in version 5, but it wasn’t easy to find. Now you’ll be able to see when it’s available right at the bottom of the screen.

Google Earth 6 would definitely be Treebeard’s favorite version of Google Earth yet.

Tuesday, November 23, 2010

Indian IT to become hot spot for M&A in 2-3 yrs: Gartner

The next two to three years could see the Indian IT sector become a hot spot for merger and acquisition (M&A) deals. That's the word from research firm Gartner, which says the 10 deals seen so far this year are just the beginning.
With 10 deals in the pocket so far this year, the Indian IT sector seems to have caught the consolidation bug, and Gartner says that the next two to three years will only see this number increase.
Gartner argues that increased competition in the sector will drive players to take the acquisition route to expansion. Deal sizes may vary widely, from deals of around USD 50 million to deals that cross the USD 250 million mark.
Gartner adds that the most common scenario will involve an IT player looking to acquire IT vendors with a complete bouquet of service offerings, not just niche services.
Partha Iyengar, VP and distinguished analyst at Gartner, said, “For an English-speaking market, and to some extent even a European market, India is becoming the center of gravity of that global delivery story. So they are looking for acquisitions in India of tier 2, tier 3 providers. The second is outward M&A from India, where the service providers, we've had a lull because of the recession, primarily focused on mainland Europe.”
Gartner also sees greater interest in India from Japanese players. Reports are already doing the rounds that Japanese companies like Fujitsu, NTT and Hitachi are on the prowl for stakes in mid-cap IT firms in India. Gartner says that Japanese IT firms have a limited presence in India so far and are afraid of losing clients who want to grow their Indian footprint, and that this will spur M&A interest from Japan.
Peter Sondergaard, Senior VP - Research, Gartner said, “We believe the Japanese providers will look at acquisitions in areas that are important to them and some of the large ones do need presence in India, so that is one area we will see acquisition.”
Deals will not be restricted to cross-border ones. Consolidation among local tier-2 and tier-3 players is also expected to pick up steam, and may even overshadow partnerships in the space.

Wednesday, November 10, 2010

Microsoft SharePoint: Three Deployment Challenges

Enterprise adoption of SharePoint is rapidly on the rise: A new survey from document management company Global 360 reveals that 90% of the survey's 886 respondents currently use SharePoint, with 8% using SharePoint 2010.

Moreover, 67% of those that use SharePoint spread it out enterprise-wide, indicating that SharePoint is not just for the IT department -- it's for all departments.

The survey also highlights how SharePoint is used at organizations. It commonly starts out as a content repository but then transitions to something more dynamic. Sixty-seven percent of survey respondents have extended SharePoint's use to manage document workflows; 66% use it for portal and web content management; and 56% use it to support business processes.
The idea of using content in SharePoint to improve the business is a major theme of the survey. Of the organizations surveyed, 27% say that over half of the documents stored in SharePoint are used to support mission-critical parts of the business.

But despite widespread adoption as well as improvements in search, workflow and social networking in SharePoint 2010, the SharePoint platform does come with its own set of challenges, according to the survey results.

Out-of-the-Box User Experience Not Great
Only 17.6% of survey respondents feel SharePoint delivers a great out-of-the-box user experience that adequately meets their needs. Conversely, 78% describe it as somewhere between somewhat adequate and inadequate, saying it requires additional in-house design and development.

When asked about the biggest challenge with their SharePoint implementations, 21% of survey respondents cited a "lack of an intuitive, easy-to-use interface for business users."
And an inadequate user interface usually means trouble, according to the Global 360 report: "Generic user experiences often lead to slower user adoption, lower productivity by users seeking workarounds to applications that do not meet their needs, and higher costs to rollback and customize applications."

Building Business Applications Takes Time and Effort
SharePoint, particularly SharePoint 2010, has made advances in areas such as social media, offline access and better CRM and ERP integration. But according to the Global 360 report, "the gap between what has been delivered and what can be achieved is still dramatic."

How One Company Made SharePoint 2010 More Social

The IT group at tech services company Unisys has been thinking about a social networking platform for two years now.

But some recent factors finally put a plan into action: the arrival of a new CEO two years ago who believed strongly in social networking technology and the arrival of Microsoft's SharePoint 2010 with new social features.

Another motivator for Unisys, which provides various IT services for large corporations and government agencies and has over 25,000 employees worldwide, is that employees and clients have come to expect a "Facebook for the enterprise" as more people use social media outside of work.

"Employees are expecting these social tools in the workplace," says John Knab, director of IT applications at Unisys. "Our senior leadership recognized this, and wanted to apply social tools in a way that could help the business."

Indeed, Facebook-esque features like status updates, microblogs, wikis, community pages, and the ability to tag and share content are spilling into the enterprise. They arrive by way of corporate microblogging site Yammer and "enterprise 2.0" social software suites from vendors such as SocialText, Jive, Atlassian and NewsGator.

All of these companies' suites stand on their own but they are also compatible with Microsoft's sprawling content management platform, SharePoint.

SharePoint 2010 Better, But Not Social Enough

Microsoft, well aware that nimbler enterprise 2.0 companies are selling social software to enterprises, added more social networking features such as wikis, blogs and tagging into SharePoint 2010, released in May.

These enhancements caught the eye of Unisys, a SharePoint customer for six years, and inspired an early upgrade from SharePoint 2007 to SharePoint 2010 through Microsoft's Rapid Deployment Program, which began in January and wrapped up in June.

Yet although the social enhancements in SharePoint 2010 are an improvement, Unisys felt that SharePoint's MySites -- profile pages that include social networking features -- were not quite Facebookish enough, and called on enterprise 2.0 vendor and Microsoft partner, NewsGator, to fill in the gaps with more dynamic microblogging, tagging and RSS feeds.

A True Microblogging Platform

"When you get SharePoint 2010 out of the box, it does not create real microblogging. It's just a wall that doesn't broadcast out," says Unisys Community Manager Gary Liu.

Help Improve Ubuntu on 'Bug Day'

By helping to triage reported bugs, even nontechnical users can participate in making the open source Linux software better. 

One of the great strengths of open source software is that it is continuously being scrutinized and improved by users and developers around the world.

Ubuntu, for example, has a global community of participants who are constantly working to make the Linux distribution better by contributing to the development, design, debugging, documentation, support and other aspects of work on the free and open source operating system.

Today, there's a global online event planned in which anyone can donate a little bit of their time to improving Ubuntu. It's called Ubuntu Bug Day, and it's a great opportunity for users and fans to get involved and contribute to the operating system--no training or experience required.

Bug Triage
Ubuntu Bug Days are actually regular events in the Ubuntu world, and they typically take place on a dedicated Internet Relay Chat (IRC) channel called #ubuntu-bugs.
To join a Bug Day, you'll need client software designed for IRC; many options for various operating systems are available for free download. The IRC section of Ubuntu's online documentation lists several possibilities, but if you use a recent version of Ubuntu, Empathy is the default.
On average, more than 1,800 Ubuntu bugs get reported every week, and the primary task on Ubuntu Bug Days is to "triage"--or classify--those reports so that they can be addressed as quickly as possible. Much as emergency-room patients are triaged the minute they walk in the door so that they get what they need as soon as possible, bug reports go through a similar categorization process.
Individual Bug Days typically focus on triaging a specific category of bug reports, and today it's bugs for which no associated package was listed in the original report. So, those participating in today's Bug Day will be going through bug reports that don't list which software package is affected, and then adding that information. Once that's done, the bug reports can be forwarded on to the appropriate place for fixing.
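For readers who like to poke at the bug queue from a script as well as from IRC, Launchpad (the tracker that hosts Ubuntu's bug reports) exposes the same data through its launchpadlib Python library. The sketch below is only an illustration of the kind of query a triager might run, not part of the official Bug Day instructions; the anonymous login and the searchTasks call reflect my reading of the Launchpad API, and deciding which reports genuinely lack a package is still left to the human reading each one.

# Rough sketch, not official Bug Day tooling: list some Ubuntu bug reports
# still in the "New" state, the pool that today's triagers work through.
# Assumes the launchpadlib package is installed; the exact parameters are
# my assumption and may need tweaking.
from launchpadlib.launchpad import Launchpad

# Anonymous, read-only access is enough for browsing public bug reports.
lp = Launchpad.login_anonymously("bug-day-example", "production")
ubuntu = lp.distributions["ubuntu"]

# "New" tasks are the untriaged ones; a triager reads each report, works out
# which source package is affected, and records it so the bug can be routed on.
for task in ubuntu.searchTasks(status="New")[:20]:
    bug = task.bug
    print(bug.id, bug.title)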

Tuesday, November 9, 2010

5 Keys for Full Recovery in the Cloud

The cloud is a natural solution for disaster recovery, but careful consideration must be given before entrusting your data to a sky-high backup repository. Can you recover workloads from the cloud? How well does it scale? What's the nature of its billing system? Is its infrastructure secure? And will it offer complete protection?

While cloud computing is a familiar term, its definitions can vary greatly. So when it comes to online backup, the cloud is an important feature that can play a large role in securing and protecting data during a disaster, an approach I like to refer to as "cloud recovery."
To be worthy of the cloud recovery title, a solution should have the five features outlined below.

1. Recover Workloads in the Cloud

There is an old saying in the data protection business that the whole point of backing up is preparing to restore. Having a backup copy of your data is important, but it takes more than a pile of tapes (or an online account) to restore. You might need a replacement server, new storage, and maybe even a new data center, depending on what went wrong.
The traditional solutions to this need are to either keep spare servers in a disaster recovery data center or suffer the downtime while you order and configure new equipment. With a cloud recovery solution, you don't want just your data in the cloud -- you want the ability to actually start up applications and use them, no matter what went wrong in your environment.

2. Unlimited Scalability

If you were buying disaster recovery servers yourself, you would have to buy one for each of your critical production servers. The whole point of recovering to the cloud is that the provider already has plenty of servers.
The ideal cloud recovery solution won't charge you for those servers up front but is sure to have as much capacity as you need, when you need it. Under this model, your costs are much lower than building it yourself, because you get the benefit of duplicating your environment without the cost.

3. Pay-Per-Use Billing

I love pay-as-you-go business models because they force the vendor to have a good product. Plus, this makes the buying decision much easier -- just sign up for a month or two (or six), and see how it goes.
Removing the up-front price and long-term commitment shifts the risk away from the customer and onto the vendor. The vendor just has to keep the quality up to keep customers loyal.
We also know that data centers are more cost-efficient at larger scale, especially in the management effort, and they require constant improvement. In your own data center you might have some custom configurations, but in a disaster recovery data center you just need racks, stacks of servers, power and cooling. You are much better off paying a monthly fee to someone who specializes in it.

4. Secure and Reliable Infrastructure

Lots of people like to bash cloud providers for security and reliability, but I think they hold the providers to the wrong standard. Although it is fine, in the abstract, to point out all the places where cloud providers don't achieve perfection in security and reliability, as a customer evaluating a cloud vendor, it seems better to compare them to your own capabilities.
I believe that most of the major cloud providers' infrastructures are more secure and more reliable than those of most private data centers. The point is that security and reliability are hard, but they are easier at scale. Having control over your own data center isn't enough -- you also have to spend the money to buy the necessary equipment, software, and expertise. For most companies, infrastructure is a necessary evil. Companies like Amazon and Rackspace do infrastructure for a living, and they do it at huge scale. Sure, Amazon's outages get reported in the news, but do you think you can outperform them over the next couple of years?

5. Complete Protection

Remember the "preparing to restore" line? For me, it really comes home in this idea of complete protection. If your backup product asks you what you want to protect, I am already suspicious. My vote is, "get it all." I see lots of online products offering 20GB plans, and to me, they look like an accident waiting to happen. I don't want to know which files I need to protect -- I want to click "start" and know that any time I want, I can click "recover", and there won't be any "please insert your original disk" issues.
The places people normally get bitten by this are databases (do you have the right agent?), configuration changes (did you patch your server, or add a new directory of files?), and weird applications (the one that a consultant set up, and you don't really understand how it works). Complete protection means that all of these things can be protected without requiring an expert in either your own systems or the cloud recovery solution.

Technology Is Only Part of Disaster Recovery Planning

Having the right technology in place is only one part of an effective disaster recovery plan. What's too often overlooked are people and facilities. If a disaster should strike, it's vital that important data can be saved, but complete recovery planning will take into account that people will need to go to their homes or to other offices and facilities to resume normal operations.
 
Every server must be accounted for, protected, backed up, and ready to be brought back online if the organization loses the physical site that hosts the production systems. The problem is that this approach leaves out two-thirds of the total DR planning that the modern organization must do in order to survive a site disaster. Technology is not a small part of your IT DR planning, but it is only one part.
When approaching DR planning, think of it as a trinity of concerns. First, you have the technology, which most companies plan for in some way. Second, you have people: the employees who use the data systems have to be taken into consideration. Third, you have facilities -- after all, you need someplace to house the technology and the personnel.

Technology and People

Technological resiliency is something IT shops deal with daily. They know how the servers are backed up or replicated (often both). They know which operations they'll perform during a restoration and/or failover procedure. While testing of DR technology plans still happens woefully infrequently, the planning itself is handled in the majority of businesses these days.
People, on the other hand, are often ignored. Companies plan how all their vital data systems will fail over within minutes to another location, but don't know what to do with the employees who are sitting in the same building as the failed servers. Even if it was something as simple as a bandwidth failure that caused the failover to happen, the users who would normally connect over that same bandwidth are now twiddling their thumbs.

Complete DR planning will take into account that people will need to go to their homes or to other offices and facilities to resume normal operations. They may need VPN connectivity or remote desktop (terminal services) systems in place to allow them to access their applications when their desktops are no long accessible.

They'll also need some way of staying connected to the DR planners so they can be alerted as to where to go and what to do in order to get back online. How will you send a company-wide email when no one in the company can access the email systems? Smartphones may help, but most organizations don't issue smartphones to every employee who would be impacted by a disaster. Telephone lists, websites and other tools can help get the word out and get personnel where they need to be. Just make sure that employees are trained on where to look for information well in advance of any disaster.

Whatcha Gonna Do?

Speaking of getting people where they need to be, ensuring that they have somewhere to be is a critical part of the plan. If your DR plan calls for servers to be brought up in a hosted facility, where will your users sit to access those systems? Do you have other offices, or can you rent space at a temporary facility? Where will your clients come to do business with your company, and how will they find you?

While many businesses have embraced the digital age, there are far more who cannot do all of their business virtually. Temporary office space, call centers, phone lines and fax facilities must be planned for well in advance of a disaster. You might also need to arrange transportation and even temporary lodging if critical employees will need to travel in order to reach this new facility. Keep in mind that a single method of travel is just as much of a single point of failure for your DR plan as a single server is.

As you can see, IT DR planning is a large component of a well-rounded DR plan, but done in isolation it is an invitation to a secondary disaster for your business. A true and complete DR plan will account for people, facilities and technology (think "People, Places and Things") before you're done. Plan for all three effectively, and you will be able to handle disasters on many different levels without fail.

2011 IT Salaries: How Much Are You Really Worth?

Datamation's annual IT Salary Guide lists salaries for various IT professionals, from software developers to IT systems administrators. Additionally, it lists salary increases for specialty IT skill sets. Curious what you're worth on the open market? Find out what you can expect to earn, should you decide to look for a new job in the new year.

The good news: The 2011 IT Salary Guide demonstrates that IT salary levels are once again headed upward.

The 2011 IT Salary Guide indicates some respectable boosts in average salary. Lead applications developers get a 4.7% increase and software engineers enjoy a 4.1% boost in paycheck. Okay, so IT managers move up a ho-hum 2.5% and project managers rise 2.8%, but Web developers jump 4.6% and System Security Admins levitate a healthy 4% (seems like security is always hot).

The bad news: these rising IT salary numbers are merely compensating for last year's 2 percent to 4 percent decreases. So alas, the net effect is that IT salaries are essentially flat over the last year or so. Yes, times are tough.


Below is salary info for a sys admin position, one of many positions for which data has been gathered and published. To see the other positions, check out the article for complete survey results.

Systems Administrator

2011 average salary range: $53,250 - $83,000.
  • The 2011 salary range is an increase of 3.6 percent over this job's 2010 salary range, which was $51,250 to $80,250.
  • The 2010 salary range was a 2.8 percent decrease under this job's 2009 salary range, which was $52,750 to $82,500.
  • The 2009 salary range is an increase of 3.6 percent over this job's 2008 salary range, which was $51,750 to $78,750.
Add a salary increase for the following skills:
  • Add 6 percent for Basis administration skills (down from 7 percent in 2010)
  • Add 9 percent for Cisco Network administration skills (no change from 2010; down from 12 percent in 2009)
  • Add 8 percent for Linux/Unix administration skills (no change from 2010; down from 10 percent in 2009)
  • Add 8 percent for virtualization skills (up from 7 percent in 2010)
  • Add 4 percent for Windows 2000/Windows 2003/XP/Vista skills (down from 6 percent in 2010; down from 8 percent in 2009)
  • Add 6 percent for Windows 7 skills (down from 7 percent in 2010)
Note: Since these numbers are national averages, they must also be adjusted based on your area of the country.

Salary levels are approximately 10 percent to 30 percent higher in the Northeast; about average in the South Atlantic (Florida to Delaware); average to modestly lower in the Midwest, Mountain West, and South; and 5 percent to 30 percent higher on the West Coast.
IT salaries in large metropolitan areas are higher than the national average. For example, in the following cities they are:

City (deviation from the national average):
  • Boston, Mass.: 30% higher
  • Stamford, Conn.: 31% higher
  • New York, N.Y.: 41% higher
  • Washington, D.C.: 30% higher
  • Philadelphia, Penn.: 15% higher
  • Atlanta, Ga.: 10% higher
  • Miami, Fla.: 10% higher
  • Chicago, Ill.: 23% higher
  • Dallas/Houston, Texas: 3% to 5% higher
  • Irvine, Calif.: 24% higher
  • Los Angeles, Calif.: 24% higher
  • San Diego, Calif.: 14% higher
  • San Francisco, Calif.: 35% higher
  • San Jose, Calif.: 32% higher
  • Seattle, Wash.: 18% higher
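To make the arithmetic concrete, here is a small sketch of how the published figures might be combined into a rough estimate: start from the national range, add the skill premiums, and apply a metro-area deviation. The base range, premiums, and deviations come from the numbers above, but the idea of simply summing the adjustments is my own simplifying assumption, not something the salary guide prescribes.

# Back-of-the-envelope sketch, not a formula from the salary guide: take the
# national range for a systems administrator, add the listed skill premiums,
# and apply a city deviation. Summing the percentages is an assumption.
BASE_RANGE = (53250, 83000)  # 2011 national range for a systems administrator

SKILL_PREMIUMS = {           # from the "add a salary increase" list above
    "cisco": 0.09,
    "linux_unix": 0.08,
    "virtualization": 0.08,
    "windows7": 0.06,
}

CITY_DEVIATIONS = {          # a few entries from the city list above
    "boston": 0.30,
    "chicago": 0.23,
    "san_francisco": 0.35,
}

def estimate_range(skills, city):
    """Return a (low, high) estimate after skill and regional adjustments."""
    uplift = 1.0 + sum(SKILL_PREMIUMS[s] for s in skills) + CITY_DEVIATIONS.get(city, 0.0)
    low, high = BASE_RANGE
    return round(low * uplift), round(high * uplift)

# Example: a Linux/Unix admin with virtualization skills in San Francisco.
print(estimate_range(["linux_unix", "virtualization"], "san_francisco"))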

Monday, November 8, 2010

All-in-One Web Browser and Social Networking Tool

RockMelt is a web browser that gives as much importance to social networking as to surfing the worldwide web. It is touted as having been built from scratch around the way people actually use the internet today.
  

The browser was developed in secret for the past two years, which spawned numerous assumptions, especially given that one of its major backers is Marc Andreessen, who is recognized as the creator of the world's pioneering web browser.

At this point, RockMelt is giving selected users a chance to test the browser. Social networking has been integrated into the browser, which uses cloud technology to let users retrieve their favorites from any PC they happen to be using.

It is necessary to sign in before RockMelt and its social networking features can be used. Signing in lets users easily access their social networking accounts on Facebook or Twitter through the web browser itself, without going to the individual websites.

A drop-down list provides the results of a search, which the browser pre-loads to enhance page loading speed.
Google is the force behind the browser's search element, and the browser itself is derived from Chromium.

How typical users react to RockMelt's version of the web browser and its integrated social networking tools will be worth watching; a comparable browser launched years ago captured less than one percent of the market. With giants such as Microsoft, Apple, Mozilla and Google taking up most of the market, RockMelt will have its hands full when it enters the browser market.

However, the company, with its innovative web browser and integrated social networking features, also has some strong backers, namely Diane Greene of VMware, Bill Campbell of Apple, and Andreessen's venture capital firm.