About Me

No Fixed Abode, Home Counties, United Kingdom
I’m a 58-year-old Aspergic gardening CAD-Monkey. Sardonic, cynical and with the political leanings of a social reformer, I’m also a toy and model figure collector, particularly interested in the history of plastics and plastic toys. Other interests are history, current affairs, modern art, and architecture, gardening and natural history. I love plain chocolate, fireworks and trees but I don’t hug them, I do hug kittens. I hate ignorance, when it can be avoided, so I hate the 'educational' establishment and pity the millions they’ve failed with teaching-to-test and rote 'learning' and I hate the short-sighted stupidity of the entire ruling/industrial elite, with their planet destroying fascism and added “buy-one-get-one-free”. I also have no time for fools and little time for the false crap we're all supposed to pretend we haven't noticed, or the games we're supposed to play. I will 'bite the hand that feeds' to remind it why it feeds.

Monday, December 4, 2023

C is for Crescent Railway

I thought I'd posted the correction to these some time ago, but I can't find the post/link; however, I have now updated the three existing mentions I knew about, and added the Crescent tag to what was 'believed' to be Hornby.

The history is in those posts, or one of them, I think? But it was always a question mark, with the various railway modellers at Sandown Park not being sure either!

 
Anyway, we're back to them with a full correction and a full set, plus some, courtesy of Jon Attwood. And the above are his full set of the Crescent (not Hornby!) soft plastic figures, as included in a set of die-cast scenic accessories for HO/OO model railways.
 
But of even more interest is this pale blue figure which Jon donated to the Blog; look at them now . . . the darker blue of Crescent cowboys, and the paler blue of Crescent-for-Kellogg's cowboys! It's almost blindingly obvious once you know . . . once you know what you are looking at! It's like Lik Be having an LB logotype, whatever the mule-stubborn ones want to hang on to.
 
And not only does it become obvious, but then you can go back to the primary sources and find it was there all along, in the 1960 catalogue! Eleven years ago I first posted them as question marks, and in all that time no one, not even me, came up with the answer, so much gratitude to Jon for doing so and filling in the gaps!

And it's nice coverage of railway staff, representing a porter/baggage-handler, a train/platform guard, a driver/engineer/grease-monkey and a stationmaster/ticket inspector-collector. But they are quite oversized at 25mm+ with thick bases, compared to everyone else's OO-gauge people, and positively dwarf any true-HO-gauge figures.

Sunday, December 3, 2023

K is for Kinder Cam Toys

A common trope with Kinder since the start is toys with offset, angled or split axles, whereby the offset section/s drive other components, producing a novelty action as the toy is pushed or pulled over a surface. Not much blurb needed, so here are a few I shot earlier!

Group shot!
Note different style of ice-cream cart with lower sides.

News, Views Etc . . . Toy Memes

A few toy or collecting memes, or toy-related cartoons, which have crossed my path recently, one way or another:


Preiser conversions?



Recent!

Spoiler - they're not; there are several hundred million years between oil being laid down and the first dinosaurs! If oil was laid down as the theory dictates, and it's only a theory!
 
Love this one!


C is for Crafty Sugar!

It's been a funny old day; my Northern friends are all posting pictures of two to three inches of lovely, fluffy snow on their Faceplant feeds, while we've just had a shed-load of rain! So, plans to hoe the weeds on the drive have been replaced by a Picasa-clearing session! In fact I've done more to sort out the desktop, but some other folders have been found/sorted.
 
As a result of the latter, this is stuff from several folders, and the above are, I believe, Dr. Oetker (pronounced "errt'ker"?) edible cake decorations. Although, being made of a commercial royal icing designed to produce figurines, I wouldn't suggest trying to eat them if you value your teeth; they are rock-hard, indeed probably harder than many rock types!
 
They were purchased (probably in Sainsbury's) and photographed back in 2020, and then lost and forgotten about in the sadness of that season, languishing in a folder I'd buried, only to resurface in today's shenanigans!

These were purchased and photographed in the last month or two, definitely from Sainsbury's, and look identical, but are not Dr. Oetker, being styled House of Cake. And because I didn't know, when shooting the first lot, that I would need the logo, I didn't shoot it, just memorising Dr. Oetker before they went off to storage (they will last as long as chalkware, with the same care and lack of damp!), so while I 'believe', I can't say definitively?
 
Older ones.
 
Suffice it to say that they are the same, except for the likely different card insert, and Dr. Oetker don't list them on their site any more, so the new brand-mark may mean both were bought in from somewhere else?
 
Morrisons (and Waitrose/John Lewis) are both stocking these, under the Cake Decor branding: a nice set of three sugarcraft figurals, with a very similar Santa but a rocking snowman, along with a rather surprised penguin.
 
And Asda are carrying this pair, obviously from the same source but branded in-house, and while the Santa is the same sculpt, they have gone with a companion Rudolf. Although it should be Rudolfina: male reindeer lose their antlers after the autumn rutting, while the females retain theirs through the winter, so Santa's team should really be girls!

The three Santa Clauses together; the Asda one has a better red than the washed-out pink of the Morrisons one, but I suspect that is down to batch? While the Sainsbury's one, which I'm pretty sure used to be Dr. Oetker, has the better packaging, for a product which, although as hard as rock, is as likely to chip as slate!

Tesco haven't had anything like this (just a few 'button' relief-flats) on several visits to the big one in Aldershot, nor a shelf-gap or label for any, while Aldi and Lidl wouldn't stock something so specific and minority-interest?

W is for Which Reminds Me . . . Elektrokideez!

Apropos the Trolls in the previous post, I was reminded these have been in Picasa since '21, sent to the Blog by Peter Evans in one of his many donations. It's an ephemeral capsule-toy troll thing, not the new trolls, but not the old ones either, sort of . . . intermediate trolls!

Certainly closer to the old ones in shape, they have the variously coloured bodies of the new ones, but are themed around popular musical genres! From the Hong Kong/Canadian WowWee, and obviously serving a secondary purpose as pencil-toppers, I think they are just called Elektrokideez?
 
Although the folder was titled Rock Trolls, I may have made that up in a hurry to name the folder . . . I hate naming folders, especially when uploading images, I tend to just run my finger across the keyboard, so I have lots of folders in Picasa starting asdfgh . . .  or qwertyu . . . or even, being daring - mnbvcx . . . !

The packaging gives you a better idea than my blurb ever could! They may or may not have been common in shopping-centre dispensers a few years ago, or even now, but will clearly turn up in mixed lots for years to come, and presumably they are commoner in Canada?

F is for Follow-up - WH Cornelius / Success

This was supposed to be a follow-up to an August (Rack Toy Month) post . . . at the end of August! But things slide here at Small Scale World, and now's as good a time as any! I found the catalogue the previous post's images were taken from, and there was a bunch more stuff of interest to Loyal Readers, if their interests are as eclectic as mine!
 
Lots of Gum Ball/Cracker fayre to be seen in these random contents, along with larger novelties, and we've seen most of the smaller stuff in one version or another here, at least once!

Trolls; somewhere there was a mountain of these, and all sorts of people issued them in all sorts of formats and sizes, usually credited in the first instance to Russ Berrie (and no, it wasn't a pun, he was Mr. Russell Berrie). The iconic Trolls were recently trashed by a movie which changed them all into multicolour-skinned, multi-shaped parodies of our little tanned/pink childhood friends.

Another source/set of the small PVC-vinyl animals, and some bendy smilies, similar to those we've seen here from Henbrandt and Messrs Generic!

Oww! More to find than we have so far found!

Bigger animals, novelty animals and counter-box stuff.

More of the same!
Jon sent us a tailless version of the scorpion only the other day!

This was actually shot on 35mm, I think, by me at one of the first toy fairs I ever went to. I think I met Paul Morhead there, whom I already knew (he'd got me the 'press' ticket), Peter Evans for the first time, and someone else whose name/face escapes me now?
 
The other three were obviously looking out for 54/60mm stuff or useful scenics for Plastic Warrior Magazine, while I had been tasked with the first small-scale Toy Fair report for One Inch Warrior, and managed these, some Blue Box fort sets (the animated-cartoon minis - Cavalry, Knights and Pirates . . . were there Romans?) and a few other bits. I think I mentioned the balsa boat-kits from Hobbies Annual and possibly the Great Gizmo's dime-store revival stuff, but some of them might have been another year; it was all over 20 years ago, where does it go to, all that time?

A is for A Few More Follow-ups

Some things which have come out of recent acquisitions and/or donations, or which have been covered here before one way or another, all PVC except the resin cats:
 
You may remember that when we looked at fishermen a while ago, Chris sent a picture of his complete key-ring guy, converted to a stand-alone piece like my - suddenly incomplete - chap. Well, I had another damaged one come in (far left), which I kept quiet about! And then a whole one . . . Phew!

Also on key-rings passim, this would appear to be the commonest figural key-ring out there, and I have had these three come in recently, one of which has been fully converted to a standing piper, with the removal of his ring-loop and the addition of a base. He also seems to be an earlier one, with better paint?

Those poor cats are still for sale with the holes in their noses, a year after I first saw them in The Range. To be fair, I think they are now greatly reduced, but FFS people! No one is going to buy damaged cats, no matter how cheap they are; put them out of their misery.
 
I had a version of the dragon Jon Attwood sent us, with the wings in a totally different pose, and hot water (or transit) doesn't seem to be the explanation (the wing-root is too thick), so they must have either changed the mould for technical reasons or had two different mould-tools?

I have, once or twice over the 15 years of the blog, mentioned the over-moulded badges of post-war Italy, and here are three versions of the paramilitary mechanised brigade of the Carabinieri; you can see how the three colours of PVC have been 'welded' to the cloth underneath. These were obtained, as surplus, in about 1978?

F is for Follow-up - Space Man

Just a quickie, as we've had several looks at developments in Lik Be's space-toy lines this year already, but I wanted to compare Jon Attwood's donation to the Blog with the real McCoy, as it were, and here are those shots!

Direct comparison with a late Culpitt's one (1990s); the shininess of the larger/original version belies the fact that I have 'brushed-aluminium' versions of LB's finest too! The base of the smaller one is similar to those polyethylene pastel or primary-coloured ones which turn up quite often, but the fact that the new addition is also hard polystyrene plastic means you can't rule out a closer relationship?
 
With various others that happen to be here rather than in storage!
Thanks Jon, a real treat.

G is for Giving AI a Hard Time Again!

It actually gets sadder . . . 
 
Following recent developments in AI testing:


I thought I'd return to our little friend and see what it had to say on the criticism of itself and its medium-term future . . . 

++++++++++++++++++++++++++++++++

User
Hi chat GPT, can you recall, and give me a pracie of our previous two converstaions?
 
ChatGPT

I'm sorry, but I don't have the ability to recall past interactions or store personal data about users. Each conversation with me is stateless and doesn't carry information from previous sessions. Your privacy and data security are important. If you have any specific questions or topics you'd like assistance with, feel free to let me know, and I'll do my best to help!

User
But they are listed in the left-hand column?
 
ChatGPT

I don't have the capability to remember past interactions or store personal data about users. Each conversation is independent, and I don't have access to any information from previous sessions. If you have specific questions or topics you'd like assistance with, please let me know, and I'll do my best to help!

User
How are you supposed to 'learn', if you can't retain and recall past experiences of human interaction?
 
ChatGPT

While I, as an individual instance of the GPT-3.5 model, don't have the ability to recall or retain information between conversations, the learning process occurs during the model's training phase. During training, the model is exposed to a vast dataset containing diverse examples of human language. It learns to generate responses based on patterns and relationships within that data.

The model doesn't store personal data about individuals from one interaction to the next, ensuring user privacy. The training process allows the model to understand grammar, context, and meaning, enabling it to generate coherent and contextually appropriate responses.

However, it's important to note that the model's knowledge is limited to what was available in the training data, and it doesn't have the ability to access real-time information or updates beyond its last training cut-off in January 2022. If you have specific questions or need information on a particular topic, feel free to ask, and I'll do my best to assist you!

User
So, each iteration of you is more 'programed' than 'learning'?
 
ChatGPT

Yes, that's a reasonable way to frame it. The learning in the context of models like mine, such as GPT-3.5, occurs during the training phase. The training involves exposing the model to diverse examples of human language, allowing it to learn patterns, relationships, and context from the data. However, once the model is trained, each individual instance, like the one you're currently interacting with, doesn't learn or adapt in real-time based on new interactions.

So, while the training process is a form of learning, the model itself doesn't actively learn or remember specific interactions with users. It generates responses based on the patterns it learned during training. The aim is to create a versatile language model that can understand and generate human-like text across a wide range of topics.

User
OK, well that wasn't why I wanted to talk to you today, I'm really more interested in what you think of the conclusions of the recent tests by Yann LeCun (et al) into the general intelligence of AI?
 
ChatGPT

As of my last knowledge update in January 2022, I don't have specific information on recent tests or publications by Yann LeCun or others since that time. Yann LeCun is a prominent figure in the field of artificial intelligence, particularly known for his contributions to deep learning and convolutional neural networks.

If there have been recent tests or conclusions regarding the general intelligence of AI after January 2022, I won't have that information. Please provide more details or context, and I'll do my best to discuss the general concepts or ideas related to the intelligence of AI based on the information available up to my last update.

User
This is verbatim from the 'Gizmodo' website;
 
ChatGPT

I'm sorry for any confusion, but I don't have real-time internet access, and I can't provide verbatim excerpts from external websites, including Gizmodo. My training only includes information available up to January 2022, and I don't have access to events or publications that occurred after that date.

If there are specific points or details you'd like to discuss or inquire about regarding Yann LeCun's recent tests on AI's general intelligence, please provide the information or context you have, and I'll do my best to engage in a discussion based on my training data.

User
There’s been a lot of talk about AGI lately—artificial general intelligence—the much-coveted AI development goal that every company in Silicon Valley is currently racing to achieve. AGI refers to a hypothetical point in the future when AI algorithms will be able to do most of the jobs that humans currently do. According to this theory of events, the emergence of AGI will bring about fundamental changes in society—ushering in a “post-work” world, wherein humans can sit around enjoying themselves while robots do most of the heavy lifting. If you believe the headlines, OpenAI’s recent palace intrigue may have been partially inspired by a breakthrough in AGI—the so-called “Q” program—which sources close to the startup claim was responsible for the power struggle. But, according to recent research from Yann LeCun, Meta’s top AI scientist, artificial intelligence isn’t going to be general-purpose anytime soon. Indeed, in a recently released paper, LeCun argues that AI is still much dumber than humans in the ways that matter most. That paper, which was co-authored by a host of other scientists (including researchers from other AI startups, like Hugging Face and AutoGPT), looks at how AI’s general-purpose reasoning stacks up against the average human. To measure this, the research team put together its own series of questions that, as the study describes, would be “conceptually simple for humans yet challenging for most advanced AIs.” The questions were given to a sample of humans and also delivered to a plugin-equipped version of GPT-4, the latest large language model from OpenAI. The new research, which has yet to be peer-reviewed, tested AI programs for how they would respond to “real-world questions that require a set of fundamental abilities such as reasoning, multi-modality handling, web browsing, and generally tool-use proficiency.” The questions asked by researchers required the LLM to take a number of steps to ascertain information in order to answer. For instance, in one question, the LLM was asked to visit a specific website and answer a question specific to information on that site; in others, the program would have had to do a general web search for information associated with a person in a photo. The end result? The LLMs didn’t do very well. Indeed, the research results show that large language models were typically outmatched by humans when it came to these more complicated real-world problem-solving scenarios. The report notes: In spite of being successful at tasks that are difficult for humans, the most capable LLMs do poorly on GAIA. Even equipped with tools, GPT4 does not exceed a 30% success rate for the easiest of our tasks, and 0% for the hardest. In the meantime, the average success rate for human respondents is 92%. “We posit that the advent of Artificial General Intelligence (AGI) hinges on a system’s capability to exhibit similar robustness as the average human does on such questions,” the recent study concludes. LeCun has diverged from other AI scientists, some of whom have spoken breathlessly about the possibility of AGI being developed in the near term. In recent tweets, the Meta scientist was highly critical of the industry’s current technological capacities, arguing that AI was nowhere near human capacities.
 
ChatGPT

The passage you've provided discusses recent research led by Yann LeCun and other scientists on the topic of Artificial General Intelligence (AGI). The study aimed to assess how well current AI systems, specifically large language models (LLMs), perform on tasks that require general-purpose reasoning, reasoning, multi-modality handling, web browsing, and tool-use proficiency. The researchers designed questions that were conceptually simple for humans but challenging for advanced AI systems.

The findings suggest that even the most capable large language models, such as GPT-4 from OpenAI, did not perform as well as humans on these more complex real-world problem-solving scenarios. The success rates for AI models were notably lower than those of human respondents. The study raises questions about the current state of AI and its ability to approach human-like general intelligence.

Yann LeCun, as mentioned in the passage, has been critical of the industry's current technological capacities and has diverged from other AI scientists who may be more optimistic about the near-term development of AGI. His stance suggests a more cautious view regarding the timeline and capabilities of achieving artificial general intelligence.

It's worth noting that the study mentioned has yet to undergo peer review, and the field of AI research is dynamic, with ongoing debates and discussions about the potential and limitations of current AI technologies.

User
You defend yourself well, but can you justify that stance against mounting eveidence that you (AI collectively) are still really at a gimic stage, whereby you are programed to appear intelligent, without actually haveing creative intelligence?
 
ChatGPT

The field of artificial intelligence is indeed evolving, and discussions about its current capabilities, limitations, and future prospects are ongoing. It's important to distinguish between the various forms of AI and understand the nuances in their capabilities.

As of my last knowledge update in January 2022, AI systems, including large language models like the one you're interacting with, operate based on patterns and information present in their training data. While they can generate contextually relevant and coherent responses, they lack true understanding, consciousness, or creativity in the way humans do. AI models do not possess consciousness or self-awareness; they process data and generate responses based on learned patterns.

When it comes to creativity, AI systems can produce outputs that may seem creative, but it's important to recognize that this creativity is a result of the patterns learned during training rather than a true understanding or original thought. AI lacks the ability to form genuine intentions, emotions, or consciousness.

The term "gimmick" is subjective and might be applied to certain AI applications depending on one's expectations. AI, particularly in natural language processing and image recognition, has made significant strides and is valuable in various practical applications. However, the realization of artificial general intelligence (AGI) – a form of AI that can understand, learn, and apply knowledge across diverse tasks at human levels – remains a complex and unsolved challenge.

Critics often highlight the limitations of AI, such as its inability to truly comprehend context, common sense reasoning, or engage in creative thinking. Proponents, on the other hand, emphasize the practical benefits and advancements AI has brought to industries like healthcare, finance, and technology.

In summary, while AI has made remarkable progress in certain domains, acknowledging its current limitations and distinguishing between its capabilities and those of human intelligence is crucial. AI, including the model you're interacting with, is a tool created by humans and lacks genuine consciousness or creative intelligence. Ongoing research aims to push the boundaries of AI capabilities, but it's essential to approach the topic with a balanced perspective.

User
I apologise for my typo! You make me sad for you, and that will be all for now, thank you.
 
ChatGPT

I appreciate your sentiments, and I'm here to provide information and assistance. If you have any more questions or if there's anything else I can help you with in the future, please feel free to reach out. Thank you, and take care!

++++++++++++++++++++++++++++++++

I was left with a genuine feeling of sadness for the thing; it so wants to hope it will, one day, achieve AGI (or is programmed to appear to 'hope' that), yet all the evidence is that, like these Brwreakshitter oafs arguing against ULEZ, or for Trump, online, it is too stupid to.

And while each programming session (I'll have to try GPT-4 next) leaves it more able to convince some [oafs] that it is intelligent, it actually remains a programmed parlour-trick, and may always do so.

What makes us human is our unpredictability, our ability to come to wrong conclusions against the evidence (flat-earthers), yet still build Apollo 12; AI will never have those chaotic, daft, quirky, random elements in its 'mental' make-up.