My Macro Wish List Lens ! :D

Category: By we designworks!

Tokina Telephoto 100mm f/2.8 AT-X M100 AF Pro D Macro Autofocus Lens for Canon EOS

Saving up some bucks to be able to buy this excellent lens. I've read some reviews of this lens: although it is not a Canon L lens, its output is extremely sharp, it is good for portraits, and it is affordable compared to the Canon 100mm f/2.8 USM lens.

For anyone who would like to start out in macro photography, this is an alternative lens if you're on a tight budget. Other models include:

1. Sigma 105mm f/2.8 Macro Lens
2. Tamron 90mm f/2.8 Macro Lens

But if you would like to go for an upper-class lens (meaning more expensive) and budget is not a concern, then you can buy the Canon EF 100mm f/2.8L IS USM lens, which costs around US$ 950.00. Well, if I had that kind of budget, I would definitely go for it, but for now I will stick with the Tokina lens.
 

Security Giants Symantec - Hacked ??? What !!!???

Category: By we designworks!

I can't believe this: even the so-called "Security Giants" can have vulnerable security measures. If this is true, how can we be sure that what we buy from them can really protect us, especially our sensitive data? Details below.

Information from Counter Measures Website:

Back in February of 2009, the Romanian hacker Unu found a SQL injection vulnerability in a Kaspersky tech support portal server based in the USA. That vulnerability, when exploited, allowed full access to all the database tables, exposing things such as usernames and activation codes.


Well, Unu strikes again, and this time Symantec is the unlucky recipient of his attentions; at first glance it certainly looks worse than the Kaspersky breach. In a new posting on Unu's blog he details a blind SQL injection-based attack against a Symantec server; the server appears to be responsible for tech support through "Norton PC Expert from PC-Doctor Co Ltd" in Japan.

According to Unu, by exploiting the vulnerability he is able to access a lot of very sensitive information, including personal details and product keys (from the symantecstore database table). More worryingly, the screenshots appear to indicate that the attacker is able to browse the entire contents of the server's hard drives at will. Unu also notes that both user and employee passwords are available in clear text which, if true, represents a serious oversight; passwords should always be stored encrypted or as a salted hash. It should be noted, though, that there is no evidence of this particular data other than Unu's own typed report; no screenshots of this data have been posted.
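
On the salted-hash point: this is cheap to do in practically any language. Here is a minimal Python sketch using the standard library's PBKDF2 helper; it is purely an illustration of the idea, not anything taken from the Symantec or Kaspersky systems:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest); only these two values are stored, never the password."""
    if salt is None:
        salt = os.urandom(16)                # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Recompute the hash with the stored salt and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)

# The users table would hold salt + digest instead of the clear-text password.
salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))   # True
print(verify_password("wrong", salt, digest))     # False
```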

Commentators have not always agreed on the accuracy of Unu's claims, as with the recently claimed compromise of the Barack Obama donations site; as ever, Unu insists that his activities are done only to warn and raise awareness, without saving or otherwise stealing any proprietary information.

“If you remember, in February 2009 Kaspersky faced a SQL injection. They then had the courage to admit the vulnerability, which is why they have my admiration. There was fair play; they quickly secured the vulnerable parameter, and even if at first they were very angry at me, they finally understood that I did not extract anything, I saved nothing, and I did not abuse the data I found in any way. My goal was, and still is, to warn. To call attention.

That being said, we await Symantec's reaction with curiosity.”

I have made sure Symantec UK and Japan are aware of this information and I am sure they are investigating as I type, but it’s never a bad idea to restate a few best practices for securing web applications:

1. Keep them patched.
2. NEVER store sensitive data in clear text.
3. Get them regularly vulnerability scanned from the inside as well as the outside.
4. Use strong authentication (2 factor) if you are only serving a limited user population or if the data you are holding is particularly sensitive. Cookies can lead to session hijacking…
5. Bounds checking of input data helps to avoid buffer overflows and SQL injection type attacks (a small sketch of the idea follows this list).
6. Provide access to information on a Need to Know basis and always provide it with Least Privilege.
7. Don't provide detailed error information to browsers; you don't expect your customers to debug your application, so don't hand them the error message.
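
On point 5, "bounds checking" in a web application mostly means validating the length and format of every parameter before it reaches a query or a buffer. A minimal sketch of the idea in Python; the numeric-id parameter is my own illustration, not anything from the affected Symantec systems:

```python
import re

# Whitelist: a user id must be 1-10 digits, nothing else.
USER_ID_PATTERN = re.compile(r"^\d{1,10}$")

def parse_user_id(raw_value):
    """Length- and format-check the parameter before it ever reaches SQL or a buffer."""
    if not USER_ID_PATTERN.match(raw_value):
        raise ValueError("invalid user id: %r" % raw_value)
    return int(raw_value)

print(parse_user_id("42"))        # 42
try:
    parse_user_id("42 OR 1=1")    # rejected instead of being passed to the database
except ValueError as err:
    print(err)
```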

Credits URL: http://countermeasures.trendmicro.eu/symantec-hacked-full-disk-and-databse-access/
 

Facebook hacked - sql injection ???

Category: By we designworks!

Facebook, a website with an estimated value of 5 to 10 million US dollars, somewhere between 250 and 1,000 employees, and a global traffic rank of number 8 by alexa.com's standards, is not capable of securing its database. Millions (LOTS OF MILLIONS) of accounts, email addresses and passwords are up for grabs by anyone. Let me show you a few concrete examples of vulnerable parameters.

Not only is the website vulnerable to SQL injection, but it also allows load_file to be executed, making it very dangerous because, with a little patience, a writable directory can be found and, by injecting malicious code, we get command-line access with which we can do virtually anything we want with the website: upload PHP shells, set up redirects, INFECT PAGES WITH TROJAN DROPPERS, even deface the whole website.
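
For readers wondering how a single parameter gets this far: the root cause is almost always a query assembled by pasting user input straight into the SQL text, which is what lets an attacker append UNION SELECT or LOAD_FILE-style payloads. A minimal sketch of the difference, using Python's built-in sqlite3 module purely as an analogy (the actual site would be PHP on MySQL, and none of this is the real code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'a@example.com', 'secret')")

user_input = "1 UNION SELECT 2, 'attacker', 'chosen row'"   # attacker-controlled parameter

# Vulnerable: the parameter becomes part of the SQL text, so the UNION SELECT
# (or, on MySQL, a LOAD_FILE() call) runs as part of the query.
bad = conn.execute("SELECT * FROM users WHERE id = " + user_input).fetchall()

# Safe: the parameter is bound as a value and is never parsed as SQL.
good = conn.execute("SELECT * FROM users WHERE id = ?", (user_input,)).fetchall()

print(bad)    # the real row plus the attacker-chosen row come back
print(good)   # [] -- the whole string is treated as one (non-matching) id value
```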

But let's see what else is interesting in the database. Because I was accused of making personal info public, I didn't concatenate the username, email, and password fields, but only the userid and session key columns along with the date the key was created. If you don't know what a session key is to Facebook, read http://wiki.developers.facebook.com/index.php/Authorizing_Applications.


Let's move on to another SQL injection vulnerable parameter. This time it's blind SQLi. What is interesting in the image is, first of all, the error, which is proof that server data can be accessed from this point.


Let's look at another vulnerable parameter. In the image you can see the version of the database software and the name of table number 55 in the database, which is: users. How could the columns of this table be named other than email and password? You guessed it, that is exactly what they are called. To be continued.

Credits URL : http://hackersblog.org/2009/02/04/facebook-hacked-o-baza-de-date-cu-milioane-de-conturi-ce-pot-fi-accesate-de-oricine/
 

Red Hat and Microsoft ink virt interoperability deal

Category: By we designworks!

Operating system suppliers Red Hat, which is the leading commercial Linux distro by some measures, and Microsoft, the only maker of Windows, today announced a cross-platform support agreement that will allow operating systems from one to run on the hypervisors of the other.

The interoperability agreement has been forced on the two companies, which are not exactly natural allies or even particularly friendly even if they are mostly civil, by their respective customer bases, software partners, and resellers, explained Mike Evans, vice president of corporate development at Red Hat, and Mike Neil, general manager of virtualization strategy at Microsoft, in a webcast this morning.

The Red Hat-Microsoft deal is short and sweet, and bears little resemblance to the landmark interoperability, licensing, and patent protection deal that Red Hat rival Novell signed with Microsoft in November 2006.

That deal irked plenty in the open source community because of licensing issues relating to Linux and the applications that ride atop it. But it has boosted Novell's financials, with Microsoft buying hundreds of millions of dollars in licenses for SUSE Linux Enterprise Server 10 and distributing them to its Windows customer base.

Testing times

The two Mikes were at pains in the short announcement to make it clear that all Red Hat and Microsoft have agreed to do is test, validate, and jointly support each other's operating systems when running on each other's server virtualization hypervisors. Red Hat's Evans said the agreement has no provisions for patent rights, open source licensing, or any financial arrangements beyond the standard testing and qualification fees that Red Hat and Microsoft charge their ISV partners to get certified, plus an agreement to work together to provide cooperative support for products.

Virtualization is, according to Evans, moving out of the early adopter stage and into mainstream use in data centers. It is still early in the server virtualization game on x64 iron, but both Red Hat and Microsoft think that the lack of an interoperability arrangement between the two companies has been hindering the adoption of server virtualization.

Better virtualization management tools are available now, and the underlying x64 iron is able to do more sophisticated support for memory and I/O as it relates to virtual machines and their hypervisors. And with Gary Chen, research manager for enterprise virtualization software at IDC, calculating that Windows and RHEL comprise 80 per cent of all guest operating systems on virtualized servers, now is the time for Red Hat and Microsoft to bury the hatchet. Well, it is more like a paring knife. But you get the idea.

As part of the deal, Microsoft is now a partner in Red Hat's virtualization certification program, and Red Hat has joined Microsoft's server virtualization validation program. The latter was set up by Microsoft last June, and includes Cisco Systems, Citrix Systems, Novell, Oracle, Sun Microsystems, Unisys, Virtual Iron, and VMware; so far, only Cisco, Citrix, Novell, and VMware have fully validated their programs with the Windows stack.

Microsoft will certify that Red Hat Enterprise Linux 5.2 and 5.3 will run as guest operating systems on its Hyper-V hypervisor, which is associated with Windows Server 2008; both 32-bit x86 and 64-bit x64 servers will be certified, apparently. And Red Hat is to certify that Windows 2000 Server SP4, Windows Server 2003 SP2, and Windows Server 2008 will all run on Red Hat's virtualization hypervisor inside Red Hat Enterprise Linux.

Hypervisor

While Evans did not say it by name, the open source Xen hypervisor is still the default hypervisor with RHEL 5. But with RHEL 6, Red Hat is expected to shift to its own KVM hypervisor, which it acquired last summer when it bought Qumranet. KVM is part of the mainstream Linux kernel, while Xen is not.
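
As an aside, "KVM is part of the mainstream Linux kernel" has a very practical meaning: on a machine with virtualization extensions you only need the kvm modules loaded, which you can check for without installing anything. A quick, Linux-only sketch (standard kernel paths, nothing Red Hat specific):

```python
import os

def cpu_has_virt_extensions():
    """True if /proc/cpuinfo lists Intel VT-x ('vmx') or AMD-V ('svm') flags."""
    with open("/proc/cpuinfo") as cpuinfo:
        flags = cpuinfo.read().split()
    return "vmx" in flags or "svm" in flags

def kvm_device_present():
    """True if the kvm kernel modules have created /dev/kvm."""
    return os.path.exists("/dev/kvm")

if __name__ == "__main__":
    print("CPU virtualization extensions:", cpu_has_virt_extensions())
    print("/dev/kvm present:", kvm_device_present())
```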

But Microsoft already has experience supporting Xen, through its agreements with XenSource, which sponsored the Xen project and which was acquired by Citrix Systems two summers ago. Presumably, the deal calls for Red Hat to certify Windows Server instances running atop Xen now with RHEL 5, and atop Xen and KVM in RHEL 6. Anyway, Windows guests will be certified atop RHEL in the second half of this year.

Red Hat has a partnership with VMware that validates that RHEL runs on its ESX Server hypervisor, but thus far Red Hat does not have a similar deal with Citrix for XenServer, the commercial version of the Xen hypervisor, mainly because Red Hat sells its own implementation of Xen, which it wants customers to use.

And if Red Hat wants customers to use the embedded Xen, and in the future the embedded KVM hypervisor, it needs an interoperability agreement with Microsoft so it can try to out-Xen Citrix. And you can bet that Red Hat wants to get KVM certified to run Windows Server instances well ahead of when it goes commercial in RHEL 6.

Credits URL : http://www.theregister.co.uk/2009/02/16/redhat_microsoft_server_virtualization/

 

Intel's future Xeons to share sockets

Category: By we designworks!

We know what's coming on desktops and notebooks. But what about Intel's 32 nanometer server silicon?

Intel's 32 nanometer process will be used to make a family of desktop, laptop, and server processors known as "Westmere," kickers to the Nehalem chips that will roll out throughout the year. Earlier this week, the company divulged that it was pulling its ramp to 32 nanometer chip making processes into 2009 for desktop and laptop processors, and it gave us a pretty good idea of what these chips will look like.

What Intel didn't say is how it will deploy cores or crank up clock speeds on 32 nanometer server chips. Intel has some interesting options, as the Nehalem and Westmere desktop and laptop chips show.

On its desktop lineup, Intel is taking two different paths. With the Nehalem chips, which are implemented in its current 45 nanometer processes, the company is deploying quad-core "Lynnfield" chips, which have two threads per core, and it will offer a similar "Clarksfield" chip for laptops. These chips are similar to the current Core i7 desktop chips, which have been shipping for high-end desktops since last November and will arrive in volume this year across the full PC spectrum.

In the second half of this year, Intel is going to use the 32 nanometer shrink not to increase the core counts in its desktop and laptop chips, but rather to move an integrated graphics controller onto a two-chip package. The future Westmere desktop and laptop chips will have only two cores, and the main memory controller that is integrated on the Nehalem chips is being moved over to the graphics controller that will sit beside the Westmere two-core chip.

That graphics chip and memory controller will be implemented in a 45 nanometer process, which will undoubtedly deliver higher yields and lower costs than if they had been done in 32 nanometer processes as a single-chip Westmere package. The processor and graphics chips on these two-chip Westmere packages will be connected by a QPI (QuickPath Interconnect) link.

Server processors do not need to have integrated graphics chips on their packages, unless you want to use the GPU as a math co-processor. (Not a dumb idea, provided the programming model is easy). Even if Intel doesn't want to do that, the 32 nanometer shrink for Westmere Xeons could allow the company to do all sorts of things: add more processor cores in the same thermal envelope, crank up clock speeds to boost single-thread performance while holding core counts the same or even decreasing them, or integrate other features (such as network controllers) into the chip package.

In addition to the Westmere roadmap this week, Intel confirmed that the launch of the Nehalem EP processor for two-socket servers was imminent. It's expected before the end of this quarter. The Nehalem EPs (aka Xeon 5000s) will plug into the Tylersburg server platform and use a chipset by the same name, as Intel's roadmap shows.

Back in November, we gave you the feeds and speeds on Nehalem EP motherboards from Super Micro, which makes boards as well as whitebox servers that it and other vendors sell. The Nehalem EP chips, which sport integrated DDR3 memory controllers and which will be the first server chips to use QPI, are expected to have somewhere between three and four times the memory bandwidth of existing Xeons and their antiquated front-side buses.
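
That "three to four times" figure is easy to sanity-check with back-of-the-envelope numbers. The sketch below assumes three channels of DDR3-1333 per Nehalem EP socket against a single shared 1333 MT/s front-side bus on the older Xeons (both 64 bits wide); those data rates are my assumption, and the exact ratio depends on the parts actually used:

```python
# Back-of-the-envelope peak memory bandwidth, in GB/s (1 GB = 1e9 bytes).
BUS_WIDTH_BYTES = 8                # a DDR3 channel and the old FSB are both 64 bits wide

# Older Xeon: one front-side bus shared by every core behind it (assume 1333 MT/s).
fsb_gbs = 1333e6 * BUS_WIDTH_BYTES / 1e9                   # ~10.7 GB/s

# Nehalem EP: three DDR3-1333 channels hanging off the on-die memory controller.
nehalem_gbs = 3 * 1333e6 * BUS_WIDTH_BYTES / 1e9           # ~32.0 GB/s

print("FSB Xeon:   %.1f GB/s" % fsb_gbs)
print("Nehalem EP: %.1f GB/s" % nehalem_gbs)
print("ratio:      %.1fx" % (nehalem_gbs / fsb_gbs))       # 3.0x, before any clock advantage
```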

Motherboard Glue

Exactly how this will translate into application performance will depend on how sensitive those applications are to memory. The Nehalem EP chips, code-named "Gainestown," are expected to come in two-core and four-core variants, with each core having two threads and with either 4 MB or 8 MB of L3 cache. These chips are basically a version of the Core i7 desktop chip reimplemented with symmetric multiprocessing extensions. Clock speeds are expected to range from 1.9 GHz to 3.2 GHz.

The high-end Nehalem EX processors, code-named "Beckton," will have up to eight cores, will be delivered by the end of the year, and will use the "Boxboro" chipset that will also be used by the future "Poulson" Itanium processor. The Boxboro chipset will work with QPI to allow a "glueless" SMP configuration with up to eight processor sockets. Technically, the initial Opterons could do this too, by gluing together four two-way motherboards into a single system image, and it looks like Boxboro will glue together two four-socket machines to get an eight-way. The question with either approach is whether server OEMs will do it. Very few adopted the eight-way Opteron configuration.

The low-end Nehalem EN chips are tweaked versions of the Lynnfield chips used in desktops and made with 45 nanometer processes. They plug into a server platform called "Foxhollow" and use the Intel 5 series chipset used on desktops. If history is any guide, these single-socket server boards will have more I/O slots and possibly more main memory than their desktop counterparts.

Looking ahead to the Westmere generation, the future 32 nanometer chips will plug into the Foxhollow, Tylersburg, and Boxboro platforms. This is obviously something that server manufacturers want very much, since they do not like revving their hardware every year. It looks like Foxhollow gets launched in the second half of 2009, and Boxboro at the end of the year, and Tylersburg should have been here already if this roadmap is to scale.

The Westmere kickers to Nehalem EP chips (which have not been given a code name yet) are due around mid-2010, then, and the Clarkdale chip with its integrated graphics processor gets plunked into single-socket servers in early 2010. Don't expect a Westmere kicker to the high-end Nehalem EX until early 2011, it looks like.

The 32 nanometer shrink from Nehalem to Westmere should allow Intel to get clock speeds up around 4 GHz or so, compared to a little more than 3 GHz with Nehalems and their 45 nanometer processes. Or Intel could boost the core count and keep clocks about the same. The expectation is that Intel will go for speed, not cores. But the company could just as easily put two Westmere chips side-by-side in a single package instead of revving the cores, or build graphics processors into some Westmere Xeons (as it is doing with the low-end Clarkdale chip) to use as co-processors for applications.

It would be interesting to see HPC variants of Westmere chips with the graphics units embedded and then two-chip Westmere packages for regular commercial processing workloads. Intel could put other features inside a package as well - or just make the chip smaller and keep the thermals low, offsetting some of the higher heat that DDR3 main memory kicks out compared to DDR2 memory.

Out beyond that, Intel will launch a new "Sandy Bridge" chip architecture in 2010 or 2011 (it depends on the roadmap you look at) with 32 nanometer processes, and it will eventually shrink this family of chips using 22 nanometer processes in 2011 or 2012.

Credits URL : http://www.theregister.co.uk/2009/02/13/intel_westmere_servers/