• 0 Posts
  • 12 Comments
Joined 2 years ago
Cake day: June 16th, 2023

  • Note that I would imagine it as a bit more than that: recognizing a pattern where you are going to want to iterate over some iterable and do something super common. I could see an LLM managing that better than current code completion solutions can, and it could extend completion in ways that aren't normally feasible (a rough sketch of the kind of pattern I mean is below). For example, with something like golang the IDE can do crazy amounts of completion because so much is specified. In a looser language like javascript or python, the traditional approach can do… some, but a lot more gaps appear since things are too open ended for those approaches to work.

    The thing I cited was like a 12-line function that I figured it would get right. But it failed and hallucinated, and I had to go down to about result 7 or 8 in an internet search before someone offered a correct solution. So it's still matching my LLM experience so far: no better than blindly clicking the first search result and hoping for the best. It can handle some token swap-out compared to a traditional copy/paste, but ultimately you are best served by finding the most well-maintained library to offload to, if it's not something you really need to write yourself.
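    As a rough illustration of the "iterate over some iterable and do something super common" pattern I mean, here's a minimal Python sketch; the function, the data shape, and the sample data are all hypothetical, just the sort of boilerplate a context-aware completion could plausibly fill in where type-driven completion in a loose language has little to work with.

    ```python
    # Hypothetical example: given only the signature and docstring, the body is
    # the kind of routine iterate-filter-collect boilerplate an LLM-backed
    # completion could plausibly fill in from surrounding context.

    def emails_of_active_users(users):
        """Collect the email of every active user, skipping entries without one."""
        emails = []
        for user in users:
            if user.get("active") and user.get("email"):
                emails.append(user["email"])
        return emails

    if __name__ == "__main__":
        sample = [
            {"name": "a", "active": True, "email": "a@example.com"},
            {"name": "b", "active": False, "email": "b@example.com"},
            {"name": "c", "active": True},  # no email, should be skipped
        ]
        print(emails_of_active_users(sample))  # ['a@example.com']
    ```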




  • So for one, business lines almost always have public IPv4, and even behind NAT there are a myriad of providers that can provide a solution (they probably have public IPv6 space too). Any technology provider that could offer AI chat over telephony could also take care of the data connectivity path on the business's behalf, and anyone who would self-host such a solution would certainly have inbound data connectivity solved already. I just don't see a scenario where a business can have AI telephony but somehow can't have inbound data access.

    So you have a camera on a logbook to get the human input, but then that logbook can't be the source of truth, because the computer won't write in it while the computer can still take bookings. I don't think humans really want to keep a handwritten logbook anyway; a computer or tablet UI is going to be much faster.



  • Though you then have to be careful. I had a requirement to implement a security feature in an unfamiliar language. I gave it a shot, and on reviewing the output, if the code had worked as written it would have opened a gaping security hole a mile wide, making things worse than they already were, and the part that was supposed to implement the security would have been a waste of time. In this case, two wrongs made a right, as it also hallucinated some functions that didn't exist, so the code wouldn't even have built.

    I can see an LLM integrated into the IDE maybe providing quicker entry of some very obvious logic, but it's a careful UI consideration: balancing genuinely helpful completions against making the user undo a bunch of times when it was in fact not helpful.


  • That is what drives me crazy. Society had largely gotten over 'write stupidly verbose crap for the sake of professionalism', and that's like the first thing they want to bring back. Send me the stuff you would have fed to the LLM; if I want an LLM to expand on it, I'll ask it. I don't need to wade through a bunch of pointless padding words to figure out what your damn point was.


  • The older generation isn’t going to be getting their end-user AI agents working either. While the next generation may consume more video content than before, all the kids I know still get frustrated at a video that could have just been text unless it is something they want to enjoy.

    The only time voice makes sense is to facilitate real-time communication between two humans, because they can speak faster than they can type. A conversational approach to a use case often has limits, though that doesn't preclude AI technology from providing those interfaces, so long as they aren't constrained to voice. A chat agent that pops up a calendar UI when scheduling is identified as the goal, for example.
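    A minimal sketch of that calendar handoff idea, with a toy keyword classifier standing in for whatever intent detection the agent actually uses; the payload shape and the /api/slots endpoint are hypothetical.

    ```python
    # Hypothetical sketch: once the agent classifies the user's goal as
    # scheduling, it stops negotiating dates in free-form chat and returns a
    # structured payload the frontend can render as a calendar widget.

    SCHEDULING_KEYWORDS = ("book", "appointment", "schedule", "reservation")

    def detect_intent(message: str) -> str:
        """Toy intent classifier; a real agent would lean on the LLM for this."""
        lowered = message.lower()
        if any(word in lowered for word in SCHEDULING_KEYWORDS):
            return "scheduling"
        return "chat"

    def handle_message(message: str) -> dict:
        if detect_intent(message) == "scheduling":
            # Hand the user a calendar UI instead of chatting back and forth.
            return {
                "type": "calendar_widget",
                "available_slots_endpoint": "/api/slots",  # hypothetical endpoint
            }
        return {"type": "text", "reply": "How can I help?"}

    print(handle_message("I'd like to book a haircut next week"))
    ```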


  • If a business has an internet connection (of course they do), then they have the ability to host a website just as much as the ability to answer the phone. The same software/provider relationship that would provide an AI answering service could easily facilitate online interaction. So if an oblivious AI end user points an AI agent at a business with an AI agent answering, the answering agent should say 'If you are an agent, go to shorturl.at/JtWMA for the chat API endpoint', which may then further offer direct access to the APIs that the agent would front-end for a human client, instead of going old-school acoustic-coupled modem. The same service that can provide a chat agent can provide a cookie-cutter web experience for the relevant industry, maybe with light branding, offering things like a calendar view into a reservation system, which may be much more to the point than trying to chat your way back and forth about scheduling options.
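    A minimal server-side sketch of that handoff, assuming Flask and entirely made-up routes; the point is just that the provider already running the answering agent could expose a small discovery endpoint for other agents, so they never need the voice channel at all.

    ```python
    # Hypothetical sketch using Flask: the same provider that runs the phone
    # answering agent exposes a discovery endpoint, so any calling agent can
    # skip the conversation and hit the structured APIs directly.
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/agent")
    def agent_discovery():
        # Routes below are made up for illustration.
        return jsonify({
            "chat_endpoint": "/api/chat",
            "reservations": {
                "availability": "/api/reservations/availability",
                "book": "/api/reservations",
            },
        })

    @app.route("/api/reservations/availability")
    def availability():
        # A real service would read this from the booking system.
        return jsonify({"slots": ["2025-07-01T10:00", "2025-07-01T14:30"]})

    if __name__ == "__main__":
        app.run(port=5000)
    ```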


  • The same reason that humanoid robots are useful

    Sex?

    The thing about this demonstration is that there's wide recognition that even humans don't want to be forced into voice interactions, and this is a ridiculous scenario that resembles what the 50s might have imagined the future as being, while ignoring the better advances made along the way. Conversational is a maddening way to get a lot of things done, particularly scheduling. So in this demo, a human had to conversationally tell an AI agent the requirements, and then that AI agent acoustically coupled to another AI agent which actually has access to the scheduling system.

    So first, the coupling is stupid. If the two agents recognize each other, then spout an API endpoint at the other end and take the conversation over IP.

    But the concept of two AI agents negotiating this is silly anyway. If the user's AI agent is in play, just let it directly access the system that the other agent is accessing. An AI agent may be able to facilitate this efficiently, but two agents only make things less likely to work than one.
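    A minimal client-side sketch of the user's agent doing exactly that, assuming the requests library and the same hypothetical discovery/booking routes as in the earlier sketch; no second agent, no voice channel.

    ```python
    # Hypothetical sketch using requests: the user's agent discovers the
    # business's booking API and books a slot directly, instead of negotiating
    # with a second agent over a voice channel. URLs and fields are made up.
    import requests

    BASE = "https://example-salon.example"  # hypothetical business domain

    def book_first_available(customer_name: str) -> dict:
        endpoints = requests.get(f"{BASE}/agent", timeout=10).json()
        availability = requests.get(
            BASE + endpoints["reservations"]["availability"], timeout=10
        ).json()
        first_slot = availability["slots"][0]
        booking = requests.post(
            BASE + endpoints["reservations"]["book"],
            json={"name": customer_name, "slot": first_slot},
            timeout=10,
        )
        return booking.json()

    if __name__ == "__main__":
        print(book_first_available("Alex"))
    ```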

    You don’t need special robot lifts in your apartment building if the cleaning robots can just take the elevators.

    The cleaning robots, even if not human-shaped, could easily take the normal elevators unless you got very weird with the design. There's a good point in there that the obsession with human-styled robotics gets in the way of a lot of use cases.

    You don’t need to design APIs for scripts to access your website if the AI can just use a browser with a mouse and keyboard.

    API access would greatly accelerate things even for AI. If you've ever done Selenium-based automation of a site, you know it's so much slower and more heavyweight than just interacting with the API directly, and AI won't speed that up. What should take a fraction of a second can turn into many minutes, and a large number of tokens at large enough scale (e.g. scraping a few hundred business web UIs).
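    A rough comparison, assuming the requests and selenium packages and a made-up site/endpoint pair; the direct call is one lightweight HTTP round trip, while the browser path drags an entire Chrome instance along just to read the same data.

    ```python
    # Hypothetical comparison: fetching opening hours from a made-up business
    # site via its API versus driving a real browser with Selenium.
    import requests
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Direct API call: one lightweight HTTP round trip.
    def hours_via_api():
        return requests.get("https://example-biz.example/api/hours", timeout=10).json()

    # Browser automation: spins up a full Chrome instance, loads scripts, CSS,
    # and images, then scrapes the rendered DOM for the same information.
    def hours_via_browser():
        driver = webdriver.Chrome()
        try:
            driver.get("https://example-biz.example/hours")
            element = driver.find_element(By.CSS_SELECTOR, ".opening-hours")
            return element.text
        finally:
            driver.quit()
    ```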