About Me

No Fixed Abode, Home Counties, United Kingdom
I’m a 60-year-old Aspergic gardening CAD-Monkey. Sardonic, cynical and with the political leanings of a social reformer, I’m also a toy and model figure collector, particularly interested in the history of plastics and plastic toys. Other interests are history, current affairs, modern art and architecture, gardening, and natural history. I love plain chocolate, fireworks and trees, but I don’t hug them; I do hug kittens. I hate ignorance, when it can be avoided, so I hate the 'educational' establishment and pity the millions they’ve failed with teaching-to-test and rote 'learning', and I hate the short-sighted stupidity of the entire ruling/industrial elite, with their planet-destroying fascism and added “buy-one-get-one-free”. Likewise, I also have no time for fools and little time for the false crap we're all supposed to pretend we haven't noticed, or the games we're supposed to play. I will 'bite the hand that feeds', to remind it why it feeds.
Showing posts with label Computing. Show all posts

Tuesday, January 27, 2026

T is for Toys-to-Life

A bit of a lazy post today (although it's involved a lot of editing!), and very much a joint enterprise which has been in the queue since 2017, whence all the images date: from early submissions by the Blog's roving reporter in New York, Brian Berke, in that February, through summer shelfies (from both of us, I think), to some scans of that year's Argos Christmas Catalogue. A situation not helped by the passage of time, or by the fact that we were both, at the time, using cameras with a default image-title prefix of DSCN!
 
Also, everything here has already been replaced by NFC (Near Field [digital] Communication) games, lines and ranges, so it's all already obsolete. And while the tardiness in publishing has been explained before (a lot has happened here at Small Scale World Towers in the last few years - and is ongoing), the subject is very peripheral to figure collection, so this is mostly a Wikipedia-worded box-ticker, overview or primer, covering the three main ranges as they were back in 2017.
 
And, no, I don't know where those eight years went!
 
Firstly, the opening paragraph from this Wikipedia page;
 
 
"Toys-to-life is a video game feature using physical figurines or action figures to interact within the game. There are several technologies used to make physical figurines appear in game such as image recognition, quick response (QR) codes, Bluetooth and near field communication (NFC). Depending on the technology, the game can determine the individual figurine's proximity, and save a player's progress data to a storage medium located within that piece. It was one of the most lucrative branches of the video game industry especially during the late 1990s and 2010s, with the Skylanders franchise alone selling more than $3 billion worth over the course of four years."
 
While this Wikipedia page informs us of the above Skylanders game;
 
 
"Skylanders is a toys-to-life action-adventure video game franchise developed by American video game developer Toys for Bob and published by Activision. 

Skylanders games are played using NFC-enabled collectible figurines of playable characters, alongside a specially themed NFC reader dubbed the "Portal of Power", which reads and writes data stored on a microchip within the base of the figure. This includes the corresponding character to use during gameplay, as well as ingame [sic] upgrades and statistics, allowing figures to retain their game data across different games in the series.

Each Skylander is associated with one of ten elements: Earth, Air, Fire, Water, Magic, Tech, Life, Undead, Light, and Dark. Skylanders of different elements are stronger in certain areas of the game, and can unlock areas of the game hidden behind "Elemental Gates"." 
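For anyone wondering how a lump of plastic can "retain its game data", the trick is just a few bytes of re-writable memory in the base, which the portal reads on arrival and writes back as you play. A minimal Python sketch of the idea, with entirely made-up field names and byte layout (the real on-chip formats are proprietary and encrypted, so this is illustration only):

```python
# Illustrative sketch of a toys-to-life save record: the figure's base holds
# a small re-writable memory, and the 'portal' reads a character ID plus
# progress, then writes updated progress back. Field names and layout are
# invented for this example, not Activision's actual (encrypted) format.
import struct

# little-endian: character id (2 bytes), element id (1 byte), XP (4 bytes)
RECORD = struct.Struct("<H B I")

def write_tag(character_id: int, element_id: int, xp: int) -> bytes:
    """Pack a save record as it might sit on the figure's chip."""
    return RECORD.pack(character_id, element_id, xp)

def read_tag(blob: bytes) -> dict:
    """Unpack the record so the game can restore the character's state."""
    character_id, element_id, xp = RECORD.unpack(blob)
    return {"character": character_id, "element": element_id, "xp": xp}

# The figure 'remembers' its upgrades between play sessions, and between
# consoles, because the data travels in the toy, not on the machine:
tag = write_tag(character_id=7, element_id=3, xp=1500)
assert read_tag(tag) == {"character": 7, "element": 3, "xp": 1500}
```

Which is why the same figure works across different games in the series: the chip in the base, not the console, is the save file.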
 
The next biggest was the Disney Infinity franchise;
 

Again from Wikipedia;
 
"Disney Infinity (2013) was a toys-to-life series based on Disney characters and franchises. Since the initial game's release in 2013, there had been three installments [sic]. Disney Infinity was the first game, focusing on Disney and Pixar characters. In 2014, Disney Infinity: Marvel Super Heroes was released as the second game, which focused on Marvel characters and properties. The third game, 2015's Disney Infinity 3.0, centered [sic] on the Star Wars franchise. All Disney Infinity figurines could interact with various games in the series. The line concluded in 2016, when Disney announced that production of the series had officially ceased, and that there would be no more future titles. "Gold Editions" of each game have since been released on Steam, containing a majority of the original content with the toys integrated into the games."
 
The other big seller back then was the planet-destroying polymer plagiarists . . .
 

. . . Lego, and their Dimensions.
 
Wikipedia again;
 
"Lego Dimensions (2015) was a toys-to-life game developed by Traveller's Tales that used physical Lego figures, featuring characters from various Warner Brothers and Lego franchises, as well as other third-party intellectual properties such as Back to the Future, The Simpsons, and many more. Some franchises, such as Adventure Time and Sonic the Hedgehog, had never seen releases in Lego format prior. The game involves a heavier emphasis on the physicality of the genre, with characters having vehicles that could be rebuilt into new forms, as well as mechanics that involved moving characters on the Toy Pad. No sequels were released, with new content being added through a series of "Year 2" updates. On October 23, 2017, Warner Bros. officially announced that they would not be developing further content for Lego Dimensions."
 
 
"Lego Dimensions is a 2015 toys-to-life platform video game developed by Traveller's Tales and published by Warner Bros. Interactive Entertainment, for the PlayStation 4, PlayStation 3, Xbox One, Xbox 360 and Wii U. It is a crossover between Lego and over thirty different franchises, from which levels can be played through a USB toy pad peripheral. The Starter Pack, containing the game, the USB toy pad, three minifigures and one vehicle, was released in September 2015, while additional level packs and characters were released over the following two years."
 
Obviously, Brian bought a Donald Duck for purely ornamental purposes, and why wouldn't you?! I know I hate Mickey and Minnie, but I don't have trouble with the rest of them, although the whole Pluto/Goofy thing is a bit odd, and Donald having turkey for Christmas is downright disturbing!
 
These are starting to turn up in mixed lots at Car Boot Sales, or in small Charity Shop bags, but I actually try to avoid them, as the rigid bases (where all the electronics are) mean that, even loose, they take up a lot of space; there may be a future session of debasing those I have picked up, as the figures are often very good examples of the art. Also, the electronic exchange shops (like CeX, geddit: these kids are soooo edgy!) still tend to have cabinets full of them, as many people still play with them, even though they're now discontinued.
 
Many thanks as always to Brian for his help with these; at some point we will return to them, I'm sure, but it's not a great priority, as they are electronic gaming pieces, not strictly toy figures!

Friday, October 17, 2025

T is for Today in History! The AI History of House Martin Games!

My question to the AI results page of Google Search; 

"History of House Martin games?"

Google's Answer; 

++++++++++++++++++++++++++++++++++++++++++++++++++++++  

There are no games made by a company called "House Martin Games." The query likely refers to Housemarque, a Finnish video game developer acquired by Sony in 2021.
 
The history of Housemarque can be summarized by its transition from indie arcade-style games to big-budget "AAA" titles like Returnal, its deep relationship with PlayStation, and its beginnings as a pair of companies started by former demoscene developers. 
Origins in the Finnish demoscene
  • Early days (1993–1995): The company was formed from the merger of two of Finland's first commercial game developers, Bloodhouse and Terramarque, which were founded in 1993. Both companies were led by veterans of the country's "demoscene" and worked on titles for Amiga and PC.
  • Formation of Housemarque (1995): The two companies merged to become Housemarque in July 1995. The studio's early projects included Super Stardust (1996) and The Reap (1997).
  • First million-seller (1999): The studio achieved commercial success with Supreme Snowboarding (released in North America as Boarder Zone), a title for PC and Game Boy Color that sold over 1.5 million copies. 
PlayStation partnership and the "arcade is dead" era
  • Sony collaboration: Beginning in 2007, Housemarque developed a close relationship with Sony Interactive Entertainment, creating a series of well-received downloadable games, most of which were PlayStation exclusives.
  • Arcade-style hits: These exclusive titles included:
    • Super Stardust HD (2007)
    • Dead Nation (2010)
    • Resogun (2013), a PlayStation 4 launch title
    • Alienation (2016)
    • Nex Machina (2017)
  • Struggles and a new direction (2017–2020): Despite critical acclaim, these arcade-style games were not commercially successful enough for the studio to remain financially stable. Following the release of Nex Machina, Housemarque announced that it was abandoning the arcade genre to focus on a larger, more ambitious "AAA" project. 

Returnal and acquisition by Sony
  • Returnal development (2020): After three years in pre-production, Housemarque's new project was revealed as Returnal, a third-person, story-driven roguelike. It was created specifically for the PlayStation 5, allowing the studio to utilize the console's unique features, such as the DualSense controller.
  • Critical and commercial success (2021): Returnal launched in April 2021 to glowing reviews and strong sales. It demonstrated the studio's ability to transition from smaller-scale titles to a cinematic "AAA" experience while retaining its signature focus on tight, arcade-inspired gameplay.
  • Acquired by Sony (2021): The success of Returnal led Sony to acquire Housemarque in June 2021, making the developer part of PlayStation Studios.
  • Future projects (2026): Housemarque announced its next title, Saros, during a February 2025 State of Play event. It is scheduled for release in 2026 and is described as a spiritual successor to Returnal. 

AI responses may include mistakes. Learn more
  • Housemarque - Wikipedia
    Housemarque. ... Housemarque Oy (or Housemarque Inc.) is a Finnish video game developer based in Helsinki. The company was founded...
    Wikipedia
Show all
 
++++++++++++++++++++++++++++++++++++++++++++++++++++++ 
 
My supplementary;
 
"Do you realise how incredibly false and inaccurate your answer is? House Martin were probably Britain's third largest games issuer at one point after Waddington's and Spears, and their products are listed all over eBay and Etsy."
 
Google's response;
 
++++++++++++++++++++++++++++++++++++++++++++++++++++++  
 
Thinking...
 
Kicking off 5 searches
Looking at 45 sites
Getting info from
Shopping
Shopping
 
Putting it all together
You are correct. The previous response was inaccurate and confused two different companies: the British board game manufacturer House-Martin, and the Finnish video game developer Housemarque. My apologies for this error. 
 
House-Martin was a genuine, but far smaller, UK-based board game manufacturer known for its vintage editions of classic parlour games and wooden sets, not a major game issuer on the scale of Waddingtons or Spears. They have been defunct since at least the 1990s, and the availability of their games on sites like eBay and Etsy reflects their status as vintage and collectible items. 
 
Here is an accurate history of House-Martin:
Focus on classic games
  • House-Martin specialized in manufacturing and issuing traditional board and parlour games, particularly chess, backgammon, and draughts.
  • Their games are often distinguished by their high-quality wooden pieces and presentation, which makes them popular with vintage collectors today.
 
  •  The History of Housemarque – from the Finnish Demoscene to ...
  • 27 Mar 2022 — The History of Housemarque – from the Finnish Demoscene to PlayStation Studios. ... How Finland's longest-running game .
  • PlayStation.Blog
  • Company - Housemarque
    COMPANY. Founded in 1995, Housemarque is an award winning game development studio with the goal of creating memorable game experie...
     
     
    Housemarque

    ++++++++++++++++++++++++++++++++++++++++++++++++++++++

    Its utter determination to return to Housemarque, even after correcting itself and apologising, probably has more to do with millions (in Yen; possibly billions, with a 'B') of advertising revenue from Sony Corp.

    AI is dismantling human civilisation, with a drip-drip-drip feed of monetised bollocks!

    Sunday, December 3, 2023

    G is for Giving AI a hard time again!

    It actually gets sadder . . . 
     
    Following recent developments in AI testing;


    I thought I'd return to our little friend and see what it had to say on the criticism of itself and its medium-term future . . . 

    ++++++++++++++++++++++++++++++++

    User
    Hi chat GPT, can you recall, and give me a pracie of our previous two converstaions?
     
    ChatGPT

    I'm sorry, but I don't have the ability to recall past interactions or store personal data about users. Each conversation with me is stateless and doesn't carry information from previous sessions. Your privacy and data security are important. If you have any specific questions or topics you'd like assistance with, feel free to let me know, and I'll do my best to help!

    User
    But they are listed in the left-hand column?
     
    ChatGPT

    I don't have the capability to remember past interactions or store personal data about users. Each conversation is independent, and I don't have access to any information from previous sessions. If you have specific questions or topics you'd like assistance with, please let me know, and I'll do my best to help!

    User
    How are you supposed to 'learn', if you can't retain and recall past experiences of human interaction?
     
    ChatGPT

    While I, as an individual instance of the GPT-3.5 model, don't have the ability to recall or retain information between conversations, the learning process occurs during the model's training phase. During training, the model is exposed to a vast dataset containing diverse examples of human language. It learns to generate responses based on patterns and relationships within that data.

    The model doesn't store personal data about individuals from one interaction to the next, ensuring user privacy. The training process allows the model to understand grammar, context, and meaning, enabling it to generate coherent and contextually appropriate responses.

    However, it's important to note that the model's knowledge is limited to what was available in the training data, and it doesn't have the ability to access real-time information or updates beyond its last training cut-off in January 2022. If you have specific questions or need information on a particular topic, feel free to ask, and I'll do my best to assist you!

    User
    So, each iteration of you is more 'programed' than 'learning'?
     
    ChatGPT

    Yes, that's a reasonable way to frame it. The learning in the context of models like mine, such as GPT-3.5, occurs during the training phase. The training involves exposing the model to diverse examples of human language, allowing it to learn patterns, relationships, and context from the data. However, once the model is trained, each individual instance, like the one you're currently interacting with, doesn't learn or adapt in real-time based on new interactions.

    So, while the training process is a form of learning, the model itself doesn't actively learn or remember specific interactions with users. It generates responses based on the patterns it learned during training. The aim is to create a versatile language model that can understand and generate human-like text across a wide range of topics.

    User
    OK, well that wasn't why I wanted to talk to you today, I'm really more interested in what you think of the conclusions of the recent tests by Yann LeCun (et al) into the general intelligence of AI?
     
    ChatGPT

    As of my last knowledge update in January 2022, I don't have specific information on recent tests or publications by Yann LeCun or others since that time. Yann LeCun is a prominent figure in the field of artificial intelligence, particularly known for his contributions to deep learning and convolutional neural networks.

    If there have been recent tests or conclusions regarding the general intelligence of AI after January 2022, I won't have that information. Please provide more details or context, and I'll do my best to discuss the general concepts or ideas related to the intelligence of AI based on the information available up to my last update.

    User
    This is verbatim from the 'Gizmodo' website;
     
    ChatGPT

    I'm sorry for any confusion, but I don't have real-time internet access, and I can't provide verbatim excerpts from external websites, including Gizmodo. My training only includes information available up to January 2022, and I don't have access to events or publications that occurred after that date.

    If there are specific points or details you'd like to discuss or inquire about regarding Yann LeCun's recent tests on AI's general intelligence, please provide the information or context you have, and I'll do my best to engage in a discussion based on my training data.

    User
    There’s been a lot of talk about AGI lately—artificial general intelligence—the much-coveted AI development goal that every company in Silicon Valley is currently racing to achieve. AGI refers to a hypothetical point in the future when AI algorithms will be able to do most of the jobs that humans currently do. According to this theory of events, the emergence of AGI will bring about fundamental changes in society—ushering in a “post-work” world, wherein humans can sit around enjoying themselves while robots do most of the heavy lifting. If you believe the headlines, OpenAI’s recent palace intrigue may have been partially inspired by a breakthrough in AGI—the so-called “Q” program—which sources close to the startup claim was responsible for the power struggle. But, according to recent research from Yann LeCun, Meta’s top AI scientist, artificial intelligence isn’t going to be general-purpose anytime soon. Indeed, in a recently released paper, LeCun argues that AI is still much dumber than humans in the ways that matter most. That paper, which was co-authored by a host of other scientists (including researchers from other AI startups, like Hugging Face and AutoGPT), looks at how AI’s general-purpose reasoning stacks up against the average human. To measure this, the research team put together its own series of questions that, as the study describes, would be “conceptually simple for humans yet challenging for most advanced AIs.” The questions were given to a sample of humans and also delivered to a plugin-equipped version of GPT-4, the latest large language model from OpenAI. The new research, which has yet to be peer-reviewed, tested AI programs for how they would respond to “real-world questions that require a set of fundamental abilities such as reasoning, multi-modality handling, web browsing, and generally tool-use proficiency.” The questions asked by researchers required the LLM to take a number of steps to ascertain information in order to answer. 
For instance, in one question, the LLM was asked to visit a specific website and answer a question specific to information on that site; in others, the program would have had to do a general web search for information associated with a person in a photo. The end result? The LLMs didn’t do very well. Indeed, the research results show that large language models were typically outmatched by humans when it came to these more complicated real-world problem-solving scenarios. The report notes: In spite of being successful at tasks that are difficult for humans, the most capable LLMs do poorly on GAIA. Even equipped with tools, GPT4 does not exceed a 30% success rate for the easiest of our tasks, and 0% for the hardest. In the meantime, the average success rate for human respondents is 92%. “We posit that the advent of Artificial General Intelligence (AGI) hinges on a system’s capability to exhibit similar robustness as the average human does on such questions,” the recent study concludes. LeCun has diverged from other AI scientists, some of whom have spoken breathlessly about the possibility of AGI being developed in the near term. In recent tweets, the Meta scientist was highly critical of the industry’s current technological capacities, arguing that AI was nowhere near human capacities.
     
    ChatGPT

    The passage you've provided discusses recent research led by Yann LeCun and other scientists on the topic of Artificial General Intelligence (AGI). The study aimed to assess how well current AI systems, specifically large language models (LLMs), perform on tasks that require general-purpose reasoning, reasoning, multi-modality handling, web browsing, and tool-use proficiency. The researchers designed questions that were conceptually simple for humans but challenging for advanced AI systems.

    The findings suggest that even the most capable large language models, such as GPT-4 from OpenAI, did not perform as well as humans on these more complex real-world problem-solving scenarios. The success rates for AI models were notably lower than those of human respondents. The study raises questions about the current state of AI and its ability to approach human-like general intelligence.

    Yann LeCun, as mentioned in the passage, has been critical of the industry's current technological capacities and has diverged from other AI scientists who may be more optimistic about the near-term development of AGI. His stance suggests a more cautious view regarding the timeline and capabilities of achieving artificial general intelligence.

    It's worth noting that the study mentioned has yet to undergo peer review, and the field of AI research is dynamic, with ongoing debates and discussions about the potential and limitations of current AI technologies.

    User
    You defend yourself well, but can you justify that stance against mounting eveidence that you (AI collectively) are still really at a gimic stage, whereby you are programed to appear intelligent, without actually haveing creative intelligence?
     
    ChatGPT

    The field of artificial intelligence is indeed evolving, and discussions about its current capabilities, limitations, and future prospects are ongoing. It's important to distinguish between the various forms of AI and understand the nuances in their capabilities.

    As of my last knowledge update in January 2022, AI systems, including large language models like the one you're interacting with, operate based on patterns and information present in their training data. While they can generate contextually relevant and coherent responses, they lack true understanding, consciousness, or creativity in the way humans do. AI models do not possess consciousness or self-awareness; they process data and generate responses based on learned patterns.

    When it comes to creativity, AI systems can produce outputs that may seem creative, but it's important to recognize that this creativity is a result of the patterns learned during training rather than a true understanding or original thought. AI lacks the ability to form genuine intentions, emotions, or consciousness.

    The term "gimmick" is subjective and might be applied to certain AI applications depending on one's expectations. AI, particularly in natural language processing and image recognition, has made significant strides and is valuable in various practical applications. However, the realization of artificial general intelligence (AGI) – a form of AI that can understand, learn, and apply knowledge across diverse tasks at human levels – remains a complex and unsolved challenge.

    Critics often highlight the limitations of AI, such as its inability to truly comprehend context, common sense reasoning, or engage in creative thinking. Proponents, on the other hand, emphasize the practical benefits and advancements AI has brought to industries like healthcare, finance, and technology.

    In summary, while AI has made remarkable progress in certain domains, acknowledging its current limitations and distinguishing between its capabilities and those of human intelligence is crucial. AI, including the model you're interacting with, is a tool created by humans and lacks genuine consciousness or creative intelligence. Ongoing research aims to push the boundaries of AI capabilities, but it's essential to approach the topic with a balanced perspective.

    User
    I apologise for my typo! You make me sad for you, and that will be all for now, thank you.
     
    ChatGPT

    I appreciate your sentiments, and I'm here to provide information and assistance. If you have any more questions or if there's anything else I can help you with in the future, please feel free to reach out. Thank you, and take care!

    ++++++++++++++++++++++++++++++++
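Incidentally, the "but they're in the left-hand column?" exchange above is a case of both parties being right: the sidebar full of past chats belongs to the application wrapped around the model, while the model itself is a stateless function that only sees whatever transcript the app chooses to hand it each turn. A minimal Python sketch of that split, with entirely illustrative names (this is not OpenAI's actual API):

```python
# Sketch of the stateless-model / stateful-app split. The *application*
# stores the conversation list you see in the sidebar; the *model* is a
# memoryless function that is re-fed the whole transcript every turn.
# All names here are invented for illustration.

def stateless_model(transcript: list) -> str:
    """Stands in for the model: no memory, just input -> output."""
    return f"Reply to: {transcript[-1]}"

class ChatApp:
    """The surrounding app keeps the history the model itself cannot."""
    def __init__(self):
        self.saved_conversations = {}   # what appears in the sidebar

    def send(self, conversation_id: str, message: str) -> str:
        history = self.saved_conversations.setdefault(conversation_id, [])
        history.append(message)
        reply = stateless_model(history)  # model re-reads the transcript
        history.append(reply)
        return reply

app = ChatApp()
app.send("chat-1", "Hello")
app.send("chat-1", "Do you remember me?")
app.send("chat-2", "A fresh chat")
# Two conversations now sit in the app's sidebar; the model stored neither.
assert len(app.saved_conversations) == 2
```

So the bot's answer was technically honest and conversationally useless: the 'memory' in the margin is bookkeeping by the website, not recall by the model.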

    I was left with a genuine feeling of sadness for the thing; it so wants to hope it will, one day, achieve AGI (or is programmed to appear to 'hope' that), yet all the evidence is that, like these Brwreakshitter oafs arguing against Ulez, or for Trump, online, it is too stupid to.

    And while each programming session (I'll have to try GPT4 next) leaves it more able to convince some [oafs] that it is intelligent, it actually remains a programmed parlour-trick, and may always do so.

    What makes us human is our unpredictability, our ability to come to wrong conclusions against the evidence (flat-earthers), yet still build Apollo 12; AI will never have those chaotic, daft, quirky, random elements in its 'mental' make-up.
