Tim Webb - Agent Of Change - Real Estate


I started my real estate marketing and sales career in January 2014, and it quickly became clear to me that most real estate agents don't want to go the extra mile with creative marketing. Creative marketing puts far more people in front of a client's property, because the property is showcased rather than just shown as the usual listing, which rarely tells a story of significance or gets anyone talking about an agent's listings on social or mainstream media, where it can be presented and positioned as news or infotainment.

Throughout my career, I looked to tell stories about my clients' properties and to create edgy marketing campaigns that achieved up to 10x the usual engagement. That brought many more buyers to the open homes and created LOTS more competition to own the property on auction day.

This is a collection of some of my best real estate marketing videos, many of which attracted a viral audience and went on to sell under the hammer for loads more $$$ than my clients were expecting ... I'll be returning to the real estate industry soon, and if you'd like to chat about how I can help you sell your home, regardless of where you live on the planet, please email me at tim@agentofchange.co.nz

Here's a collection of my past New Zealand real estate marketing videos that "challenged" the NZ real estate status quo ... Enjoy.

Tim

Users can generate videos at up to 1080p resolution, up to 20 seconds long, and in widescreen, vertical, or square aspect ratios. You can bring your own assets to extend, remix, and blend, or generate entirely new content from text.
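The limits above (resolution up to 1080p, duration up to 20 seconds, three aspect-ratio families) could be enforced client-side before submitting a request. A minimal sketch follows; the function and parameter names are illustrative assumptions, not a real API:

```python
# Client-side validation of the generation limits stated above.
# NOTE: validate_request and its parameters are hypothetical names for
# illustration; only the numeric limits come from the text.
ALLOWED_ASPECTS = {"widescreen", "vertical", "square"}
MAX_HEIGHT = 1080    # "up to 1080p resolution"
MAX_SECONDS = 20     # "up to 20 seconds long"

def validate_request(height: int, seconds: float, aspect: str) -> None:
    """Raise ValueError if the request exceeds the documented limits."""
    if height > MAX_HEIGHT:
        raise ValueError(f"resolution above 1080p: {height}")
    if seconds > MAX_SECONDS:
        raise ValueError(f"duration above {MAX_SECONDS}s: {seconds}")
    if aspect not in ALLOWED_ASPECTS:
        raise ValueError(f"unsupported aspect ratio: {aspect!r}")

validate_request(1080, 20, "vertical")  # at the limits: accepted
```

Rejecting out-of-range requests before upload avoids a round trip for inputs the service would refuse anyway.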


We’ve discovered neurons in CLIP that respond to the same concept whether it is presented literally, symbolically, or conceptually. This may explain CLIP’s accuracy in classifying surprising visual renditions of concepts, and it is also an important step toward understanding the associations and biases that CLIP and similar models learn.

Fifteen years ago, Quiroga et al. [1] discovered that the human brain possesses multimodal neurons. These neurons respond to clusters of abstract concepts centered around a common high-level theme, rather than to any specific visual feature. The most famous of these was the “Halle Berry” neuron, featured in both Scientific American and The New York Times, which responds to photographs, sketches, and the text “Halle Berry” (but not other names).

Two months ago, OpenAI announced CLIP, a general-purpose vision system that matches the performance of a ResNet-50 [2] but outperforms existing vision systems on some of the most challenging datasets. Each of these challenge datasets (ObjectNet, ImageNet Rendition, and ImageNet Sketch) stress-tests the model’s robustness not just to simple distortions or changes in lighting or pose, but also to complete abstraction and reconstruction: sketches, cartoons, and even statues of the objects.

Now, we’re releasing our discovery of the presence of multimodal neurons in CLIP. One such neuron, for example, is a “Spider-Man” neuron (bearing a remarkable resemblance to the “Halle Berry” neuron) that responds to an image of a spider, an image of the text “spider,” and the comic book character “Spider-Man,” either in costume or illustrated.

Our discovery of multimodal neurons in CLIP gives us a clue as to what may be a common mechanism of both synthetic and natural vision systems: abstraction. We discover that the highest layers of CLIP organize images as a loose semantic collection of ideas, providing a simple explanation for both the model’s versatility and the representation’s compactness.
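The "multimodal neuron" idea can be sketched as a toy numerical example: if photo, sketch, and text renditions of a concept all land near a shared direction in embedding space, a single unit whose weight vector points along that direction fires for every rendition of the concept and stays quiet for others. All vectors below are synthetic stand-ins for illustration; CLIP's real features are learned and high-dimensional, and this is not CLIP's actual mechanism, only an intuition pump:

```python
# Toy model of a multimodal neuron: renditions of one concept cluster
# around a shared direction; a neuron aligned with that direction fires
# for all of them. Vectors are random stand-ins, not real CLIP features.
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    """Normalize a vector to length 1."""
    return v / np.linalg.norm(v)

# Each concept gets a direction in a pretend 64-d embedding space.
spider_concept = unit(rng.normal(size=64))
dog_concept = unit(rng.normal(size=64))

def rendition(concept, noise=0.3):
    """A modality-specific rendition: concept direction plus small noise."""
    return unit(concept + noise * unit(rng.normal(size=64)))

spider_photo = rendition(spider_concept)   # "image of a spider"
spider_sketch = rendition(spider_concept)  # "sketch of a spider"
spider_text = rendition(spider_concept)    # "image of the text 'spider'"
dog_photo = rendition(dog_concept)         # unrelated concept

# The toy "Spider-Man neuron": weights aligned with the spider direction.
neuron_w = spider_concept

def activation(x):
    """Dot-product response of the neuron to a stimulus embedding."""
    return float(neuron_w @ x)

for name, x in [("spider photo", spider_photo),
                ("spider sketch", spider_sketch),
                ("text 'spider'", spider_text),
                ("dog photo", dog_photo)]:
    print(f"{name:>14}: {activation(x):+.2f}")
```

The neuron responds strongly to every rendition of "spider" and only weakly to the unrelated concept, which is the behavior the "Spider-Man" neuron exhibits across photographs, text, and illustrations.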