all 19 comments

[–]Mnemonic 5 insightful - 1 fun (18 children)

Brute force and specific search terms in the ad-words campaigns.

It really depends on the goal and the context, but yes, this — keep it all relevant!

Also SSL certificates and relevant meta-tags help.

Oh, and site compression (if you're using a CMS, look it up; the setting is hidden somewhere) and minimizing overall load times help.
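Site compression is usually a small server-side setting rather than CMS work; a minimal sketch for nginx (assuming you have access to the server config rather than only a CMS panel — the exact MIME list is illustrative):

```nginx
# Compress common text assets before sending them
# (HTML is compressed by default once gzip is on).
gzip on;
gzip_types text/css application/javascript application/json image/svg+xml;
gzip_min_length 1024;  # skip tiny responses where gzip overhead outweighs the gain
```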

[–]onemananswerfactory[S] 4 insightful - 1 fun (17 children)

A tight meta game on an optimized website is key for sure.

[–]Mnemonic 3 insightful - 1 fun (16 children)

Last time I did something with this, load time may be an even bigger factor when pages are ranked the same at the meta level: compressing the pages (and therefore decreasing the load time) lifted 'my' (my employer's) web shop above competitors. [Though this was the Dutch market and pretty niche (no, not sexual) products.]

'We' opted out of the AdWords and Google Search campaigns, and 'our' improved meta game and (presumably) load time gave us more traffic than the ads had without those enhancements. (A lot of people know to ignore the search-engine ads when searching.)

[–]onemananswerfactory[S] 3 insightful - 1 fun (1 child)

Yeah, my clients know that the top few results are usually (but not always) ads. Educating them takes some time, but it's vital so they know what to look for when they constantly check whether they've moved up the ranks.

[–]Mnemonic 3 insightful - 1 fun (0 children)

The hardest part is getting them to look at their own traffic (increase/decrease) under each strategy (and, depending on their industry, whether to check it weekly, monthly, or quarterly).

I'm pretty far out of the game, but in my experience the greyed-out text under the search result changes a lot. In our case the webshop CMS (an OLD one) had to be altered to accommodate that. (I believe it's a specific piece of meta-data you can enter per article.)

Our search results were [Title: Article name] {greyed out: specific information that was relevant to buyers (expert buyers)}

For example a stone:

[Brick: Grey stone, baked]

(URL)

{perfect for building a wall}

Depending on the content, the {} values are more reassuring for buyers before they click through and check it out. This is my experience in a niche market in the Netherlands (not in stone).

I don't have any idea anymore which meta tags those were (I really believe it was some meta-thingie) that controlled that greyed-out area, but I'm sure you can find that out ;)

EDIT: By {} info I mean specific technical info for the experts, so not 'great for keeping out Mexicans' but something like {bla bla bla density, mortar used, something something technical stone related}. AKA, info for the nerds!
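The greyed-out line under a search result is usually drawn from the page's meta description tag (though search engines may override it with text from the page). A minimal sketch using the brick example above; the wording is hypothetical:

```html
<head>
  <title>Brick: Grey stone, baked</title>
  <!-- Search engines often use this as the grey snippet under the result;
       the content here is illustrative, not a real product description. -->
  <meta name="description"
        content="Baked grey brick: density, mortar compatibility, and other specs for expert buyers.">
</head>
```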

[–]JasonCarswell 2 insightful - 1 fun (13 children)

ping /u/magnora7 /u/d3rr

I don't know if this is related or not to SEO.

I have found that the default search on SaidIt only searches post titles. Google and DDG are no help for searching the comments either.

I know from almost 20 years ago that you can set up your site to allow or deny their spider-bots, but is it also possible to flip it around, so instead of them deciding when to come, you invite them?

What if, after every post and comment, SaidIt pinged Google and DDG to come look? Obviously that might be overkill, so perhaps a script that kicks in after X days of inactivity on a post and then sends an invite to be crawled by their spiders.

I don't know if that's a thing but IMHO it should be.
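Something like this did exist at the sitemap level (not per page): Google historically exposed a sitemap "ping" endpoint that hinted it should re-crawl, though it has since been deprecated. A hedged sketch of building such a request; the saidit.net sitemap URL is hypothetical:

```python
from urllib.parse import urlencode

def sitemap_ping_url(sitemap_url: str) -> str:
    """Build the (historical) Google sitemap-ping URL.

    Google used to accept a plain GET at this endpoint as a hint to
    re-crawl the sitemap; the endpoint has since been deprecated, so
    treat this as a sketch of the idea, not current practice.
    """
    return "https://www.google.com/ping?" + urlencode({"sitemap": sitemap_url})

print(sitemap_ping_url("https://saidit.net/sitemap.xml"))
# → https://www.google.com/ping?sitemap=https%3A%2F%2Fsaidit.net%2Fsitemap.xml
```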

There is another issue. I don't know if it really exacerbates things, but I doubt it's helpful beyond the very practical matter of search capability, and that is SaidIt's content. If anyone were to analyse all the posts, they'd certainly gain an understanding of what SaidIt is all about. But most posts are impersonal headline titles. If anyone were to analyse the content of the comments, I doubt their understanding of what SaidIt is about would be much different, but it might be. They'd know we're even more red-pilled than just some news aggregation. They'd know about us individually, as participating characters with personalities. There's nothing to stop Google or the NSA from already doing this, but it's something to consider.

Especially when YouTube videos get pulled for one person in the comments mentioning "Jews" or something. Or so I've been told.

[–]d3rr 4 insightful - 1 fun (10 children)

We invite Google and whoever to come cache our site, so it's too bad if they don't have it indexed. There's no way that I know of to tell Google "hey, come crawl this page."

What we need to do is work on a legit sitemap so that search engines get an idea of our organization/subs and are more encouraged to crawl the whole site.

Yes, search is post titles only, just like on Reddit. People have long complained about this. I'd be willing to put some effort into it; I think comments could be searchable too. Lots of work though.
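A "legit sitemap" of the sort described here is just an XML file listing canonical URLs for crawlers to start from. A minimal sketch following the sitemaps.org protocol; the sub URLs are hypothetical and would be generated from the database:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per sub (or listing page); entries here are made up. -->
  <url>
    <loc>https://saidit.net/s/SaidIt/</loc>
    <changefreq>daily</changefreq>
  </url>
  <url>
    <loc>https://saidit.net/s/news/</loc>
    <changefreq>daily</changefreq>
  </url>
</urlset>
```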

[–]Mnemonic 3 insightful - 1 fun (8 children)

A warning (sort of): when saidit grows, searching the comments might take up too much space/time.

As I (and BEWARE, just me) see it, saidit is mostly about posting links to other sites, and when someone thinks 'hey, that's for me' they will look at the comments. For example, the ability to search the comments for 'the' is a waste of computational power (which includes time).

A sitemap for subs is a thing to consider, perhaps updated once a month (or every 2 weeks, that's the time limit, right? Maybe once a week, every Sunday).

EDIT: maybe: if you are on someone's profile, search only their comments for terms; that takes a lot less time and might be just what people want.

[–]d3rr 3 insightful - 1 fun (7 children)

Yeah good warning man, you're right. At our current volume we could handle comment searching but at Reddit's volume it becomes a real technical challenge that might not be worth all of the effort. I'd rather put the effort into something like being able to save more than 1000 things, and then being able to search your saved things.

Yeah being able to search all of someone's comments would be cool too and make for a quicker search, but that still means search indexing every single comment on the site.

It's all about what people want out of their news aggregator I guess. This seems pretty low on the priority list for most.
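For comment search at a small site's volume, an embedded full-text index is one low-effort option. A sketch using SQLite's FTS5 module (assuming the bundled SQLite was compiled with FTS5, as most modern builds are); the comments inserted here are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# FTS5 virtual table: tokenizes and indexes both columns for full-text queries.
conn.execute("CREATE VIRTUAL TABLE comments USING fts5(author, body)")
conn.executemany(
    "INSERT INTO comments (author, body) VALUES (?, ?)",
    [
        ("Mnemonic", "compressing pages improved our load time"),
        ("d3rr", "we need a legit sitemap for the search engines"),
    ],
)
# A plain MATCH searches every indexed column; a query like
# 'author:d3rr AND sitemap' would restrict it to one user's comments
# (the per-profile idea suggested above).
rows = conn.execute(
    "SELECT author FROM comments WHERE comments MATCH ?", ("sitemap",)
).fetchall()
print(rows)  # → [('d3rr',)]
```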

[–]magnora7 2 insightful - 1 fun (6 children)

I agree with your assessment here.

One other random thing I noticed in regard to external crawling: I don't think our app traffic is registering on Alexa. I wonder if there's a switch you can flip on the app to enable that traffic to register to our main domain name? If it's some horrible project to make that happen then don't worry about it; just a passing thought, definitely not a big deal.

[–]d3rr 2 insightful - 1 fun (5 children)

flip on the app to enable that traffic to register to our main domain name?

It's very possible that this is doable and not a tech nightmare, I'll think about it.

[–]magnora7 2 insightful - 1 fun (4 children)

Okay, cool. I wouldn't put more than an hour or so in to it. If it takes longer than that, I wouldn't worry about it.

[–]JasonCarswell 1 insightful - 1 fun (0 children)

A site map sounds good.

It may be good for statistical analysis too. Whether it's for all that nonsense I was talking about with subs, or maybe for future documentation in the SaidIt article, but more importantly for internal stats awareness, search, navigability, and crawl-ability, and likely many other things.

Perhaps including how to chop it up for decentralized distribution.

You'd think there'd already be a ready-to-go universal open-source search-crawler to plug into any site.

https://duckduckgo.com/?q=open+source+search+engine+software

[–]Mnemonic 3 insightful - 1 fun (1 child)

I know from almost 20 years ago you can set up your site to allow or deny their spiderbots

No, you can't. robots.txt only stops 'lawful' ones. Blocking spiders the hard way is... hard (if you open 20 posts in new tabs to read, you can look like a spider to an arbitrary spider-blocker; and if that blocks me anyway, I {an unlawful spiderbot} will just fetch up to 20 articles every [insert certain amount of time]).
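For reference, a minimal robots.txt; as noted above it's purely advisory, so only well-behaved crawlers honor it (the paths and sitemap URL are hypothetical):

```
# Advisory only: polite crawlers follow this, scrapers ignore it.
User-agent: *
Disallow: /private/
Sitemap: https://saidit.net/sitemap.xml
```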

[–]JasonCarswell 2 insightful - 1 fun (0 children)

Right. That's what I meant.

I wouldn't know how to deal with spiders I didn't want.

Looking back, I wonder if that robots.txt was a trick. "Don't look at my private folder. No. Don't do it. It's right there where I'm indicating, but don't look."

But I was not talking about blocking them. I was trying to invite them to scan SaidIt: 1) so we could search it, 2) so the more content we feature, the broader their index gets and the vaster SaidIt seems/is (compared to a site with 3 pages), and 3) so the diversity of content may also meet more searchers.

Maybe this is naive meat-think and the Al Gore Rhythms are nothing like my fanciful wists.