Photo: Koshiro K / Shutterstock.com
ChatGPT, the viral AI that has recently been nothing short of a sensation in the tech world, meets its rival — Claude. Launched by Anthropic, a startup co-founded by former OpenAI employees and backed by Google, Claude promises to be an ‘ethical’ alternative to ChatGPT.
Claude can perform a range of tasks, such as searching for particular text in documents, summarizing, collaborative writing, answering questions and coding. Notably, Anthropic claims that this new AI is “less likely to produce harmful outputs, easier to converse with, and more steerable”. Claude can also take direction on personality, tone, and behaviour.
Anthropic also released Claude Instant, which is available at one-sixth the cost of Claude and has been optimized for low-latency, high-throughput use cases. As far as pricing goes, both models are available on a pay-as-you-go basis, with an option for dedicated capacity for companies with high usage.
The company plans to introduce more updates in the coming weeks to make the AI more helpful, honest, and harmless for customers. “We think that Claude is the right tool for a wide variety of customers and use cases,” Anthropic told TechCrunch.
Several companies, including Quora, Juni Learning, Notion, Robin AI and DuckDuckGo, have already begun integrating Claude into their products. DuckDuckGo launched DuckAssist, a natural-language search assistance tool powered by Claude. Similarly, Quora offers Claude through Poe, its experimental AI chat app, and Notion is using Claude’s writing and summarizing capabilities to boost the productivity of its users.
Like ChatGPT, Claude was trained on public web pages up to spring 2021. Anthropic also employed a technique known as Constitutional AI, which relies on a set of 10 principles that have not yet been released to the public but which provide a “principle-based approach to aligning AI systems with human intentions”. Although the principles themselves are not known, Anthropic says that concepts such as beneficence, nonmaleficence and autonomy have been incorporated into them.
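In broad strokes, Constitutional AI has a model critique and revise its own outputs against a written list of principles, rather than relying solely on human feedback. A minimal sketch of that critique-and-revise loop might look like the following — the model functions and the principle wording here are hypothetical stand-ins for illustration, not Anthropic's actual implementation or API:

```python
# Hypothetical principles, echoing the concepts Anthropic has named.
PRINCIPLES = [
    "Avoid harmful or dangerous content (nonmaleficence).",
    "Be genuinely helpful to the user (beneficence).",
    "Respect the user's autonomy and choices.",
]

def model_generate(prompt):
    # Stand-in for a language-model call that drafts an initial answer.
    return f"Draft answer to: {prompt}"

def model_critique(response, principle):
    # Stand-in: ask the model whether the response violates the principle.
    # Returns a critique string, or None if no violation is found.
    return None

def model_revise(response, critique):
    # Stand-in: ask the model to rewrite the response per the critique.
    return response

def constitutional_respond(prompt):
    """Generate a response, then check and revise it against each principle."""
    response = model_generate(prompt)
    for principle in PRINCIPLES:
        critique = model_critique(response, principle)
        if critique is not None:
            response = model_revise(response, critique)
    return response
```

The key design idea is that the principles are written down once and applied uniformly at training time, so the model's behaviour can be steered by editing the list rather than re-collecting human labels.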
Claude has its limitations: it is worse at math than ChatGPT and cannot do professional-level programming. It has also been known to hallucinate, at one point inventing the name of a nonexistent chemical out of the blue. It is also possible to bypass the AI’s safety features via clever prompting.
In late 2022, Google invested around $300 million in Anthropic, acquiring a 10 per cent stake in the company. With Microsoft investing billions in OpenAI and Google backing Anthropic, it will be interesting to watch the two giants battle it out in the AI market.