I am working on a list which displays a large number of contacts (400 to 500 for a typical user). Currently, I am using Dojo (a customized widget) which is created 400 times (once for each contact). This of course is resulting in a lot of rendering delay. What is the best approach to display large lists in HTML/JavaScript? Each list item needs to have an image.
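One approach that usually helps here: skip per-contact widget instantiation entirely and build the whole list as a single HTML string, inserted in one operation. A minimal sketch (the contact fields and node names are assumptions):

    function renderContacts(contacts, listNode) {
        var html = [];
        for (var i = 0; i < contacts.length; i++) {
            var c = contacts[i];
            html.push('<li class="contact"><img src="', c.imageUrl,
                      '" alt="" width="32" height="32"><span>', c.name, '</span></li>');
        }
        listNode.innerHTML = html.join(''); // one parse and reflow instead of 400 widget creations
    }

Any per-item behavior can then be attached with a single delegated handler on the list instead of one handler per contact.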
I have a website using jQuery that for the most part works fine, but it contains a very large form with a lot of fields. I have an option to save the settings from the form and to load and retrieve them from the server. The problem I'm running into is that loading settings involves changing so much in the DOM (the form is huge) that it seems to freeze up or time out in some browsers. I can't reproduce this on my computer (although it does take a while to finish processing), but I've gotten enough reports of the problem that I'm looking for some advice as to how I can speed it up.
The site is [URL]. I've tried to speed it up by caching a good chunk of the selectors I'm interacting with, and I'm using IDs to access the fields in most circumstances. But I'm not sure what else I can do to really optimize the loading. How can I go about improving the load speed?
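One technique that often fixes "script is taking too long" freezes: apply the settings in chunks and yield to the browser between chunks with setTimeout. A hedged sketch, assuming the server returns an array of {id, value} pairs:

    function applySettings(fields, chunkSize) {
        var i = 0;
        function nextChunk() {
            var end = Math.min(i + chunkSize, fields.length);
            for (; i < end; i++) {
                var el = document.getElementById(fields[i].id);
                if (el) { el.value = fields[i].value; }
            }
            if (i < fields.length) { setTimeout(nextChunk, 0); } // let the UI breathe
        }
        nextChunk();
    }

    applySettings(savedFields, 50); // 50 fields per tick is a starting point to tune

This trades a little total time for responsiveness, which is usually enough to stop the "unresponsive script" dialogs.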
I'm building on-the-fly <select> lists from JSON data fetched from the server. Some of them include a large number of items (>20,000).
The SQL and HTML parts are working fine. The AJAX script fetches data fairly quickly (around 1 second), and large selects are not a problem once they're built (the browser handles them nicely, even IE). The bottleneck is the process of walking the JSON data and building the <option> tags: that can take a full minute.
What's the recommended (i.e. fastest) method to generate a large <select> list?
My current approach is this:
    // Fetch data (GET method allows me to use browser cache)
    $.get(url, get, function(jsonValues, txtStatus){
        that.values = jsonValues;
    }, "json");
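The fetch is evidently not the bottleneck, so the usual advice is to build all the <option> markup as one string and inject it in a single operation, instead of creating 20,000 DOM nodes individually. A sketch (field names are assumptions); note that older IE can't set innerHTML directly on a <select>, so the whole element is rebuilt here:

    function buildSelect(container, jsonValues) {
        var parts = ['<select id="bigList">'];
        for (var i = 0; i < jsonValues.length; i++) {
            parts.push('<option value="', jsonValues[i].id, '">',
                       jsonValues[i].label, '</option>');
        }
        parts.push('</select>');
        container.innerHTML = parts.join(''); // one parse, one reflow
    }

Building the string with push() and a single join() also avoids the repeated-concatenation cost that hurts particularly in IE.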
    <!-- hide from browsers that don't support js
    if (document.images) {
        about_over = new Image();
        about_over.src = "images/btn_about_r.gif";
        about_out = new Image();
        about_out.src = "images/btn_about.gif";

        success_over = new Image();
        success_over.src = "images/btn_success_r.gif";
        success_out = new Image();
        success_out.src = "images/btn_success.gif";
    }
    // -->
and I am using it from within the HTML code in the standard way:
Question: I don't know much about JavaScript. I know that the script above preloads the images, but I don't understand why the image reference names ("about_over", "about_out", etc.) do not get used within the HTML code. It seems to me that they should be.
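The preloaded names are normally referenced from rollover handlers in the HTML; if your markup swaps src strings directly instead, the variables still do their job, because what matters is that the files are already in the browser cache by the time the swap happens. The classic usage (not your page's actual markup, just the typical pattern) looks like:

    <a href="about.html"
       onmouseover="document.images['about'].src = about_over.src"
       onmouseout="document.images['about'].src = about_out.src">
      <img name="about" src="images/btn_about.gif" alt="About">
    </a>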
I've written a function that "condenses" a string if it is too long.
    function shortenMsg(msg, maxLen) {
        if (msg.length > maxLen) {
            var over = msg.length - maxLen; // amount that needs to be trimmed
[Code]....
Each of the methods 1-3 can trim a certain amount from the string. I'd like to trim as little as possible overall.
For example, if the string needs 5 characters to be trimmed, and method1 can trim 8 characters, but method3 can trim 6, then method3 should be used. If none of the methods can individually trim the string enough, then I'd like the optimal combination of the methods that will get the job done.
I can't figure out what sort of code structure I need for this (besides a ton of if/else statements). Maybe an array that contains each of the methods, arranged in increasing order?
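A sketch of one possible structure: keep the methods in an array along with how much each can trim, pick the smallest single method that suffices, and only fall back to combining them. Finding the truly optimal combination is a tiny subset-sum problem, but with only three methods a simple greedy pass is usually enough (names are hypothetical):

    function pickTrims(needed, methods) {
        // methods: [{ name: 'method1', canTrim: 8, run: fn }, ...]
        methods.sort(function(a, b) { return a.canTrim - b.canTrim; });
        for (var i = 0; i < methods.length; i++) {
            if (methods[i].canTrim >= needed) { return [methods[i]]; } // smallest single fit
        }
        var chosen = [], total = 0; // no single method is enough: combine, biggest first
        for (var j = methods.length - 1; j >= 0 && total < needed; j--) {
            chosen.push(methods[j]);
            total += methods[j].canTrim;
        }
        return total >= needed ? chosen : null; // null = even all methods together can't do it
    }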
I need to move the entire contents of one div to a sibling div. At present I'm just doing (assuming the 2 divs are called 1st and 2nd):
What I need to know is whether this is the quickest means (in performance terms) of doing this, as I will be performing the operation regularly and on a large number of nodes, and it's in an area where the UX really can't stutter.
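For what it's worth, re-parenting the existing nodes is generally faster than any approach that serializes to HTML and re-parses (e.g. .html() round-trips). A sketch, with the ids assumed to be "first" and "second":

    var src = document.getElementById('first'),
        dst = document.getElementById('second');
    while (src.firstChild) {
        dst.appendChild(src.firstChild); // appendChild moves the node, no cloning
    }

This also keeps event handlers and form state intact, since the nodes themselves move rather than being recreated.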
I have some code which creates an extremely long table row, and I've been able to clean it up to a point where my performance is fairly decent. What I am trying to figure out is whether it's better in terms of speed to use divs as opposed to the really long table row. I didn't really find much on this topic online, so I thought I'd ask out here.
I have read many of the copious entries on the subject of IE performance (or the lack thereof) when populating select lists.
I don't mind the insert performance so much (I get 100 x 120-byte rows inserted per second up to 500 rows, and 100 rows per 6 seconds up to 3000, which isn't great, but the row count is ticking away for the user to see and they can hit the "cancel" button at any time, so overall I'm happy). What really disappoints me is the woeful performance of .remove()!
Before fetching the next result set I clear down the existing options (I *do* have to do this, don't I?) by looping through the options collection calling remove(1). (Would it be quicker if I removed the last option? Option[0] is a header.) For 3000 rows this takes an unbelievable 20+ seconds :-( Does this sound about right?
1) Is it only IE that performs badly on this?
2) Is there a quicker or more efficient way of zeroing the Select List?
2a) The w3schools reference says the "length" attribute "Returns the number of options in a dropdown list"; it doesn't say "sets or returns".
2b) The French guy (Stephane?) suggested that I should just set the length to zero, but wouldn't that result in a memory leak?
3) Do I need to create a malloc/realloc function that keeps a high-water mark of available option objects for this Drop Down and only "new" some more options when that's exceeded? (But then the Length would always be off)
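On question 2, two approaches that are commonly reported to be much faster in IE than calling remove() in a loop (a sketch; stopping at 1 keeps the header option):

    // truncate the options collection in one call
    sel.options.length = 1;

    // or swap in a rebuilt node wholesale
    var fresh = sel.cloneNode(false);                  // empty copy, same attributes
    fresh.appendChild(sel.options[0].cloneNode(true)); // keep the header
    sel.parentNode.replaceChild(fresh, sel);

Setting length is widely supported in practice (it's effectively read/write despite what that reference implies), and it shouldn't by itself leak: the detached options become garbage like any other removed node.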
I just tested all my jQuery selectors using the jQuery Tester [url], and the results seem to contradict one thing I read in a performance article: that you should descend from the closest parent ID when using classes in your selector (the article says "April 09", so the latest jQuery version was already available). In my tests, using just the class selector (like span.myClass) was always fastest (sometimes twice as fast as #myDiv span.myClass), and this in all browsers I tested, not just the ones supporting getElementsByClassName. Maybe descending from the closest parent ID only becomes a factor when you have a lot of elements on your page?
I pull XML from the server using .load() and then iterate with .each() over some 3000 nodes. I use .find() to get 7 sub-nodes and store them internally (into arrays). It works, but it is disappointingly slow. On my obsolete P4 it can take 8-10 seconds, during which the whole browser (FF) is completely frozen. On faster computers the processing time is shorter, but still way too long. What can I do to cut this time? I certainly need a speed-up of an order of magnitude; two orders would be nice. Would JSON be any faster? Or should I pull text/plain in a custom format and parse it in my JS code?
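JSON is usually the quickest win here: the per-node .find() calls against an XML DOM are what's expensive, not the download. A hedged sketch of the JSON route (URL and field names are assumptions):

    $.getJSON('/data.json', function(rows) {
        var names = [], values = []; // ...your 7 target arrays
        for (var i = 0; i < rows.length; i++) {
            names[i] = rows[i].name;   // plain property access, no DOM queries
            values[i] = rows[i].value;
        }
    });

3000 iterations of plain property reads should run in milliseconds, so this alone may deliver the two orders of magnitude.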
I have a PHP search page which can potentially display several hundred records. For each record, there is an icon which, when clicked, makes an ajax call. When the reply comes back, the text returned from the server script is added to a specific div and the source of the icon that the user clicked is changed (as a visual cue that that particular item was selected).
This works 100% perfectly in Firefox (3.5.9), Chrome, and IE 7. However, when I test it in IE 8 there is a HUGE lag between when the icon is clicked and when the div and icon are updated (usually 10-15 seconds). By commenting out one line at a time, I've narrowed it down to the line that changes the src attribute of the icon... if I just comment that line out, the ajax call is made and the div is updated instantaneously.
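One workaround worth trying (a sketch, not a confirmed IE8 fix): keep both icon states in a single CSS sprite and toggle a class instead of rewriting src, so the click triggers no attribute-driven image fetch at all:

    /* CSS (assumed): .icon { background: url(icons.png) no-repeat 0 0; }
                      .icon.selected { background-position: -16px 0; } */
    $('.icon').click(function () {
        $(this).addClass('selected'); // class flip instead of src swap
    });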
How can I improve the web site's loading performance? My current site takes 18 sec. on average to load the first time, and on a refresh it takes 12 sec. Through YSlow I am observing that the request time is high. How do I achieve better performance? My HTML code is very clean and W3C validated.
I've been working on a redesign of our site at ExperiencePlus for some time now, and long ago chose CBE menu 9 over the other menu technologies out there because of its browser independence. The problem is, as you can see, we have a pretty large site; load times for the menus and associated scripts are approaching prohibitive. So I'm trying to speed things up.
You can see the results of some simplification here - still about the same speed by my guesstimates.
So, my question is twofold, I guess. First, Mike, do you have any ideas about how long it will take X menu 4 to reach maturity? No pressure ;^) If it were ready now, I'd just drop CBE in favor of X.
Second question: How much performance improvement can I expect from removing unnecessary code (sliding, for example) from the CBE core files? I haven't played with that stuff at all, except to read it now & then when looking for solutions to problems. Does anyone have a similarly large implementation of CBE menu9 that runs faster, so that perhaps they could share their experience?
One final thing: I'm planning to eventually shove all this into a PHP document that will auto-generate chunks of the menu from database queries, especially around the tour & country listings and our "Resource Room." (X menu 4 looks like it would be vastly superior for that purpose, since it's so lean.) I'm interested in hearing from anybody who's tried to do something like this, whether they succeeded or not.
I have a client that has 5 versions of the same site located in web-viewable root folders on his server. Aside from a few minor differences such as prices, download URLs and a few text and image differences, they're the same.
Just wanted to get some opinions as to how many javascript includes I can, or should, use on the site pages, or whether there are any strong opinions on not doing it this way.
I'd like to place a set of javascript files in a folder within each site, then have all pages in each site call their site-specific include folder. This way I'll be able to use a single set of DW templates to manage the content on all the sites.
I can't convert to PHP, use SSI, nor create a dynamic solution, since the sites are already live and rank well in the search engines. The content I'll be wrapping in the includes is not important search-engine text content.
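For reference, the per-site pattern described above is just plain script tags pointing at each site's own folder, so the shared templates never need to change (paths are assumptions):

    <script type="text/javascript" src="/includes/js/site-settings.js"></script>
    <script type="text/javascript" src="/includes/js/common.js"></script>

As long as the paths are root-relative, every page in a given site picks up that site's own copies.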
I am developing a project to calculate a key performance index (KPI) using JavaScript and HTML. The calculation should be client-side, and it should happen automatically after the user inputs a value. I am very new to JavaScript, and I need help from all friends here. [URL]
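A minimal sketch of client-side auto-calculation; the formula and field names are placeholders, so substitute your real KPI definition:

    function updateKPI() {
        var actual = parseFloat(document.getElementById('actual').value) || 0,
            target = parseFloat(document.getElementById('target').value) || 0,
            kpi = target ? (actual / target) * 100 : 0;
        document.getElementById('kpi').innerHTML = kpi.toFixed(1) + '%';
    }
    // recalculate automatically as the user types:
    // <input id="actual" onkeyup="updateKPI()"> <input id="target" onkeyup="updateKPI()">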
Method 2:

    $("<div />").attr("id", "myId")
        .addClass("one two three")
        .width(100).height(100)
        .css("z-index", 1)
        .appendTo("body");
I imagine that when using the first method, jQuery does some string processing and eventually ends up doing the same thing as method #2. Is that correct? If so, is there a significant performance cost for this? Overall I think the first method is better as far as readability goes, but it would be good to also know its effect on performance.
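Method 1 isn't quoted above, but from the description it is presumably the single-HTML-string form, something like:

    $('<div id="myId" class="one two three" style="width:100px; height:100px; z-index:1;"></div>')
        .appendTo('body');

Roughly speaking, yes: jQuery runs the string through its HTML parsing path (for full markup like this, ultimately through innerHTML on a scratch element), so there is extra string work compared with method 2's direct attribute calls. For a handful of elements the difference is negligible; it mainly matters inside large loops.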
This question mainly regards using Google's analytics code on some of our websites. We currently place the code at the footer, as it can hinder load times if placed further up in the page.
For this, or in general use, is placing javascript in the BODY or HEAD better for one or the other as far as load times? Can placing scripts in one or the other allow the page to load concurrently with the script and not sequentially?
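Scripts included with a plain <script src> tag block parsing while they download and execute, which is why bottom-of-BODY beats HEAD for perceived load time. To let the page load concurrently with the script, inject it dynamically; this is the same idea Google's async analytics snippet uses (the URL here is just an example):

    (function () {
        var s = document.createElement('script');
        s.src = 'http://www.google-analytics.com/ga.js';
        s.async = true; // hint for browsers that support it
        var first = document.getElementsByTagName('script')[0];
        first.parentNode.insertBefore(s, first); // starts downloading without blocking parsing
    })();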
I have a nice javascript slideshow but it kills the rendering speed of my home page. According to Yahoo performance guru(s), javascript gets run before other stuff is rendered, so you fix that by putting the js code "at the end" of the html file.
Putting it at the end puts the slideshow at the bottom which is not the desired result. And even abs positioning is slave to the <div> structure.
How do I nullify the flow just for this one thing (I don't want to make the whole page absolute).
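The usual pattern (a sketch): leave an empty placeholder div in the normal document flow where the slideshow belongs, keep the script at the end of the file, and have it render into the placeholder. Nothing needs absolute positioning:

    <div id="slideshow"><!-- filled in after the page parses --></div>
    ... rest of page ...
    <script type="text/javascript">
        initSlideshow(document.getElementById('slideshow')); // hypothetical init function
    </script>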
I am trying to complete a JavaScript application and am having problems with code similar to that shown below.
Much testing has shown that Firefox finishes the code shown in around 0.25 secs but Internet Explorer 6 takes a massive 3.5 secs! Internet Explorer 7 gets it down to around 2 seconds - but that's still 8 times slower than Firefox and way unacceptable for my userbase.
Looking through the newsgroups there is some discussion around the differences between the way the two browsers handle arrays - but a performance differential such as this is just unbelievably dismal.
Unfortunately I need to continue to use arrays of objects and have to support the Internet Explorer client base. I have already added specification of the array size and also removed the use of array "push"ing - flattening the array is not really an option. Code:
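Two changes that historically help IE6/7 the most in array-heavy loops (a sketch; items is a placeholder for your data): read the length once, write by index into a preallocated array, and build strings with a single join instead of repeated concatenation:

    var parts = new Array(items.length);            // preallocated, no push()
    for (var i = 0, n = items.length; i < n; i++) { // length read once, not per iteration
        parts[i] = items[i].name;
    }
    var result = parts.join(',');                   // one concatenation pass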
On our website we want to bind a click (or maybe mouseover) event on every user image. The click on the image should open a layer with further information about the user. Now I'm looking for a best-practice way to solve this (with a focus on performance), because there could be a lot of user images on one page. I think, if I bind the event on a class like this,
that could slow down the site, because I read that "the class selector is the slowest selector in jQuery". I could go back to the roots and put an onclick attribute on each element, but I'm not really happy with that solution.
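Event delegation sidesteps both problems: one handler on a container catches clicks bubbling up from every user image, so nothing is bound per element and the class selector only runs when a click actually happens. With jQuery 1.4+ (the container id and handler are assumptions):

    $('#userList').delegate('img.user-image', 'click', function () {
        openUserLayer(this); // hypothetical function that shows the info layer
    });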
There's no native linked list implementation in JS. I'm wondering if it would be worth it to implement one.
I'm using a lot of insertions and deletions with arrays of around length 5. How fast are insertions and deletions in JS native arrays compared to an optimized (but not native) linked list implementation in this situation? How about arrays of length 10?
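At lengths of 5-10 this is easy to answer empirically, and arrays almost always win: splice() on tiny arrays is a few memory moves, while a linked list pays for node allocation and pointer chasing on every operation. A quick timing sketch:

    var arr = [1, 2, 3, 4, 5],
        t0 = new Date().getTime();
    for (var i = 0; i < 100000; i++) {
        var pos = i % arr.length;
        arr.splice(pos, 0, i); // insert at a rotating position
        arr.splice(pos, 1);    // delete again, keeping length constant
    }
    alert('200k splices: ' + (new Date().getTime() - t0) + 'ms');

Linked lists start paying off when lists are long and you already hold a reference to the insertion point, neither of which applies here.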
I have got a div container the size of the window itself, so it's relatively big. This in turn has a child div container which is substantially larger. This 2nd div container is absolutely positioned in the first and shall now be scrolled using the mouse, because for this project I don't want scrollbars. If the mouse moves to the edge of the outer div, the inner div should move in the appropriate direction. For that, the first container has a MouseMove listener, and depending on how close the mouse is to the edge, a scrolling-speed variable is set. Sidenote: the speed has not a linear but a quadratic increase. The moving itself is not the problem, but the calculation: because of the quadratic increase in speed, the calculation is rather expensive. The question is now whether it would be more performant if I create two arrays (for the x- and y-axis) in which I store the velocities for each pixel, or whether I re-calculate the speed on each movement.
That would mean, at a window size of 1200x700px, I'd have two arrays: one with 1200 fields and values and another with 700. And that's a relatively small resolution. This way the calculation must be performed only once; after that I only need to read the velocities out of the arrays.
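A sketch of the precomputed variant: one table per axis, filled once at startup (or on resize), so each mousemove is just an array read. maxSpeed and the exact quadratic formula are placeholders for the real tuning values:

    function buildSpeedTable(size, maxSpeed) {
        var table = new Array(size), half = size / 2;
        for (var px = 0; px < size; px++) {
            var d = Math.abs(px - half) / half; // 0 at the center, 1 at the edge
            table[px] = maxSpeed * d * d;       // quadratic increase
        }
        return table;
    }
    var speedX = buildSpeedTable(1200, 20),
        speedY = buildSpeedTable(700, 20);
    // in the mousemove handler: var vx = speedX[e.clientX], vy = speedY[e.clientY];

That said, two multiplications per mousemove are rarely the real cost; profiling the handler before adding the tables would be worthwhile.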
I'm trying to build a page that has multiple ajax calls on it. When you do it the old-fashioned way with XmlHttpRequest, you'd create a new xhr object for every call so that they execute simultaneously. If I try to do this in jQuery, it will only execute a call when the previous one has completed. This makes the page load time completely unacceptable. How can I improve the performance?
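jQuery's $.ajax/$.get calls are asynchronous and run in parallel by default, so serialization usually means either async: false is set somewhere (e.g. in $.ajaxSetup) or each call is being started from the previous one's success handler. A sketch of firing them all at once (the endpoints are assumptions):

    var urls = ['/part1', '/part2', '/part3'];
    for (var i = 0; i < urls.length; i++) {
        (function (u) {                  // closure keeps u stable per request
            $.get(u, function (data) {
                $('#' + u.replace('/', '')).html(data); // fill the matching container
            });
        })(urls[i]);
    }

Note the browser itself caps concurrent connections per host (2 in IE7, more in newer browsers), which limits parallelism regardless of how the calls are issued.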
I am interested in using a popup to show Flash games in it, which I did. Then I wanted to make all games playable in full screen, which I also got working. But I am facing issues due to wmode. If you visit my site at http:[url] and click "play game for other browsers" while using the Google Chrome browser, you will see the error: blocks appear in the game. If you click "play for Google Chrome", the error is not there. It's entirely because of WMODE: for the Google Chrome button I am using wmode, while for all other browsers I am not.

The reason for not using wmode on the other-browsers play button is that if I add wmode to the games, the performance of the game is badly damaged in Firefox and Internet Explorer, and to some extent in other browsers including Google Chrome. I want to get rid of this wmode=opaque thing, as it makes the game slow even in Google Chrome, which affects the playing quality. I tried wmode=transparent too, but nothing good came of it.

Some more information so that things go fast: I am using a Blogger blog, I have knowledge of HTML, XHTML and CSS, and for Java or jQuery it's hit and trial.