ah yes, i forgot that this article was written specifically to address you and only you
I’m gay
I appreciate your warning, and would like to echo it, from a safety perspective.
I would also like to point out that we should be approaching this, as with every risk, from a harm reduction standpoint. A drug with impurities that could save your life or prevent serious harm is better than no drug and death. People need to be empowered to make the best decisions they can, given the available resources and education.
Venus rhymes with a piece of anatomy often found on men. Obviously they got it backwards
Been thinking about picking this one up
It’s FUCKING OBVIOUS
What is obvious to you is not always obvious to others. There are already countless examples of AI being used to do things like sort through job applicants, decide who gets audited by child protective services, and determine who can get a visa for a country.
But it’s also more insidious than that, because the far-reaching implications of this bias often cannot be predicted. For example, excluding all gender data from training ended up making sexism worse in this real-world example of AI-assisted financial lending, and the same was true for Apple’s credit card. We even have full-blown articles showing how the removal of data can actually reinforce bias, indicating that it’s not just what material is used to train the model that matters, but also what data is not used or is explicitly removed.
This is so much more complicated than “this is obvious,” and there are a lot of signs pointing towards the need for regulation around AI and ML models being used in places where it really matters, such as decision making, until we understand them a lot better.
big weird flex but okay vibes except actually not okay
As with most science press releases, I’m not holding my breath
Game changer for smart watches if this turns out to work and scale well
Just a heads up, we probably don’t have a ton of Russian speakers on Beehaw. This might do better posted elsewhere.
Okay I understand what you are saying now, but I believe that you are conflating two ideas here.
The first idea is about learning the concepts, and not just the specifics. There’s a difference between memorizing a specific chemical reaction and understanding types of chemical reactions and using that to deduce what a specific chemical reaction would be given two substances. I would not call that intuition, however, as it’s a matter of learning larger patterns, rules, or processes.
The second idea is about making things happen faster and less consciously. In essence, this is pattern recognition, but in practice it’s a bit more complicated. Playing a piece over and over or shooting a basketball over and over is a rather unique process in that it involves muscle memory (or more accurately it involves specific areas of the brain devoted to motor cortex activation patterns working in sync with sensory systems such as proprioception). Knowing how to declare a variable or the order of operations, on the other hand, is pattern recognition within the context of a specific language or programming languages in general (as a reflection of currently circulating/used programming languages). I would consider both of these (muscle memory and pattern recognition) as aligned with the idea of intuition as you’ve defined it.
Rote learning is not necessary to understand concepts, but the amount of repetition needed to remember an idea after a given period of time is going to vary from person to person, and with how long afterwards you expect them to remember it. Pattern recognition and muscle memory, however, typically require a higher amount of repetition to sink in, and will also vary depending on the person and the time between learning and recall.
it helps develop intuition of the relationship between numbers and the various mathematical operations
Could you expand upon this? I’m not sure I understand what you mean by an ‘intuition’.
I want to start off by saying that I agree there are aspects of the process which are important and should be learned, but this has more to do with critical thinking and applicable skills than with the process itself.
Of note, I believe this part of your reply in particular is somewhat shortsighted:
Cheating, whether using AI or not, is preventing yourself from learning and developing mastery and understanding.
Using AI to answer a question is not necessarily preventing yourself from learning and developing mastery and understanding. The use of AI is a skill, in the same way that any ability to look up information is a skill. But blindly putting information into an AI and copy/pasting the results is very different from using AI as a resource in a way similar to how one might use a book or an article. A single scientific study with a finding doesn’t establish fact - it provides evidence for it and must be considered in the context of other available evidence.
In addition, learning to interact with and use AI is a skill in the same way that learning to interact with and use a phone, or the internet, or an app are all skills. With interaction layers becoming increasingly abstract (which is normal and good), people need to have skills at each layer in order for processes to exist and for tools to be useful to humanity. Most modern tools require people who can operate at different levels with different levels of skill. Computers are an easy example, since you are replying on some kind of electronic device which requires everything from chemists to engineers to fabrication specialists and programmers (hardware, software, operating system, etc.) to work, but this is true for nearly any human-made product in the modern world. Being able to drive a car is a very different skill set than being able to maintain a car, or work on a car, or fabricate parts for a car, or design parts for a car, or design the machinery that manufactures the parts for the car, and so on.
This is a particularly long-winded way of pointing out something that’s always been true - the idea that you should learn how to do math in your head because ‘you won’t always have a calculator’, or that you need to understand how to do the problem in your head or how the calculator works in order to understand the material, is a false one, and it’s one that erases the complexity of modern life. Practicing the process helps you learn a specific skill in a specific context, and people who make use of existing systems to bypass the need for that skill are not better or worse - they are simply training a different skill. The means by which they bypass the process is extremely important - they could give it no thought at all, or they may think critically about it and devise an approach which still pays attention to the underlying process without fully understanding how to replicate it. The difference in approach is important, and in the context of learning it’s important to experiment and build the critical thinking skills to decide where you wish to have that additional mastery and what level of abstraction you are comfortable with and care about interacting with.
It’s hard to engage with such short replies. What parts confuse you? What do you need more guidance on? Is there anything else that is unclear about my response and how we value community around here?
I think she wants you to just rack up a bunch of medical debt so she can cancel it, gotta think in loopholes like the big companies
Beehaw may not be the right space for you if you’re unable to consider context. Beehaw is explicitly a community, a safe space, and somewhere where context absolutely matters. We don’t believe it’s possible to have a healthy community where people don’t see each other as complex humans. We talk about this quite a bit in our docs: for example, in the doc titled Beehaw is a community we talk about how community is a necessary part of this platform, and in the doc titled Beehaw, Lemmy, and A Vision of the Fediverse we talk about how we want to be more like a village than a train station (and link to a fantastic article about this), which is a direct reflection of the importance of social ties and connections to running a healthy community.
I’m certainly not saying that you should leave, but I am typing all of this up because I need you to understand what our values are around here. Some of your content and your interactions have already been reported by multiple people - I mention this because I think it’s a reflection of your attitude towards your purpose here and how you are interacting with the space. I’ve advised others to hold off on taking moderator actions because I know that adapting to and interfacing with a community takes time, and that this process can often be bumpy - we wish to give people good faith when it is deserved, but that is predicated on a willingness to engage in good faith with the community. If that is not how you wish to interact with social media, that is your decision and we will respect it, but this is not a place where we allow that kind of behavior.
Extremely based, good job FTC
Is that for sure right? I don’t know. I don’t really care. My daughter was happy with an answer and I’ve already warned her it could be bullshit. But curiosity was satisfied.
I’m not sure if you recognize this, but this is precisely how mentalists, psychics, and others in similar fields have always existed! Look no further than Pliny the Elder or Rasputin for folks who made a career out of magical and mystical explanations for everything and gained great status for it. ChatGPT is in many ways the modern version of these individuals, gaining status for having answers to everything which seem plausible enough.
I use it/its in spaces where I do not plan on engaging with people as individuals
I see your pronouns on Beehaw are it/its, can you clarify whether you intend on engaging with people as individuals and if not how that shapes how you treat them?
This isn’t just about GPT. Of note, one example from the article:
In this case, researchers manually spoiled the results of a non-generative AI designed to highlight areas of interest. Being presented with incorrect information reduced the radiologists’ accuracy. This kind of bias/issue is important to highlight, and it’s of critical importance when we talk about when and how to ethically introduce any form of computerized assistance into healthcare.